WO2019196878A1 - Memory management method and related device - Google Patents

Memory management method and related device

Info

Publication number
WO2019196878A1
Authority
WO
WIPO (PCT)
Prior art keywords
memory
terminal device
application scenario
application
contiguous
Prior art date
Application number
PCT/CN2019/082098
Other languages
English (en)
French (fr)
Inventor
李刚
唐城开
韦行海
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2019196878A1 publication Critical patent/WO2019196878A1/zh

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/0223: User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023: Free address space management
    • G06F 12/0238: Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246: Memory management in non-volatile memory in block erasable memory, e.g. flash memory
    • G06F 12/0253: Garbage collection, i.e. reclamation of unreferenced memory
    • G06F 12/0269: Incremental or concurrent garbage collection, e.g. in real-time systems
    • G06F 12/0276: Generational garbage collection

Definitions

  • the present application relates to the field of computers, and in particular, to a method for memory management and related devices.
  • Fragmentation of physical memory, that is, non-contiguous memory pages, has always been one of the important issues facing operating systems, and most of the memory used by a typical application at runtime needs to be contiguous.
  • The prior art usually uses memory management algorithms, such as the buddy system algorithm in Linux, to organize fragmented memory into contiguous memory to meet the memory requirements of applications.
  • the existing memory management algorithms are mainly divided into two categories: synchronous memory defragmentation algorithm and asynchronous memory defragmentation algorithm.
  • the synchronous memory defragmentation algorithm triggers memory defragmentation in the process of allocating memory for the application if the available contiguous memory of the system cannot meet the application requirements.
  • the asynchronous memory defragmentation algorithm triggers memory defragmentation when the system's available contiguous memory falls below a set threshold.
  • the existing memory management algorithm passively triggers memory defragmentation based on specific events.
  • The synchronous memory defragmentation algorithm triggers defragmentation when the system cannot allocate contiguous memory for the current application, so the allocation must wait for the system to release memory and sort out contiguous memory before it can complete. This greatly increases the waiting time of memory allocation and affects the running efficiency of the current application.
  • the asynchronous memory defragmentation algorithm triggers memory defragmentation when the system's available contiguous memory is lower than the preset threshold, and stops when it is above the threshold.
  • When the application requires a large amount of contiguous memory, asynchronous defragmentation cannot meet the application's memory requirement in time, so the system falls back to synchronous memory defragmentation, which again leads to long memory allocation times and affects the running efficiency of the application.
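The two passive trigger policies described above can be contrasted in a minimal Python sketch. The function names and the asynchronous threshold value are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch of the two passive (prior-art) trigger policies.
ASYNC_THRESHOLD_KB = 1024  # assumed preset threshold for the asynchronous policy

def needs_sync_defrag(available_contiguous_kb: int, requested_kb: int) -> bool:
    """Synchronous policy: defragment only when the current allocation fails."""
    return available_contiguous_kb < requested_kb

def needs_async_defrag(available_contiguous_kb: int) -> bool:
    """Asynchronous policy: defragment once the available contiguous memory
    falls below the preset threshold, regardless of any pending allocation."""
    return available_contiguous_kb < ASYNC_THRESHOLD_KB
```

In both policies the defragmentation work lands on the allocation path (directly, or as a fallback when the asynchronous policy lags), which is the waiting-time problem the method below avoids.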
  • The present invention provides a memory management method and related equipment that actively perform memory defragmentation based on the application scenario and a prediction of contiguous memory demand, so as to meet the contiguous memory requirements of different application scenarios, reduce the waiting time of memory allocation, and improve application running efficiency.
  • the first aspect of the present application provides a method for memory management, which may include:
  • The terminal device acquires the probability of switching from the currently running first application scenario to each of one or more second application scenarios, where "multiple" means two or more. The terminal device then determines the target contiguous memory according to the switching probabilities that satisfy a preset condition and the contiguous memory required by each second application scenario whose switching probability satisfies the preset condition. If the contiguous memory available on the terminal device is smaller than the target contiguous memory, the terminal device performs memory defragmentation according to the target contiguous memory before switching from the first application scenario to any of the one or more second application scenarios, so that the contiguous memory available on the terminal device becomes greater than the target contiguous memory.
  • If the contiguous memory available on the terminal device is equal to the target contiguous memory, memory defragmentation may or may not be performed; this can be adjusted according to actual design requirements and is not limited herein.
  • In this way, the application scenario to which the terminal device is about to switch is predicted: the probability of switching from the first application scenario to each second application scenario is obtained, the target contiguous memory is determined from the switching probabilities that satisfy the preset condition and the contiguous memory required by the corresponding second application scenarios, and the terminal device then actively performs memory defragmentation so that its available contiguous memory is greater than the target contiguous memory. This ensures that the terminal device has sufficient contiguous memory to allocate when switching application scenarios, and no defragmentation is needed at switch time, thereby improving the efficiency of the terminal device when switching application scenarios.
  • the terminal device performs memory defragmentation, which may include:
  • The terminal device acquires the system load and determines the memory defragmentation algorithm according to the range in which the system load falls; it then defragments the memory of the terminal device according to that algorithm to increase the contiguous memory available on the terminal device.
  • In this way, the memory defragmentation algorithm can be determined by, and dynamically adjusted according to, the system load of the terminal device, so that the memory resources of the terminal device are used reasonably and the impact of defragmentation on running application scenarios is reduced, improving the efficiency of switching application scenarios.
  • the terminal device determines a memory defragmentation algorithm according to a range in which the system load is located, including:
  • If the system load is in the first preset range, the terminal device determines that the memory defragmentation algorithm is a deep memory defragmentation algorithm; if the system load is in the second preset range, the terminal device determines that it is a medium memory defragmentation algorithm; or if the system load is in the third preset range, the terminal device determines that it is a light memory defragmentation algorithm.
  • In this way, an appropriate memory defragmentation algorithm is selected and dynamically adjusted according to the system load of the terminal device, choosing among a deep, a medium, and a light memory defragmentation algorithm.
  • This reduces the impact of defragmentation on the application scenarios running on the terminal device, frees up available contiguous memory, and improves the efficiency of switching application scenarios. For example, when the load of the terminal device is high, memory defragmentation can be performed with the light algorithm to avoid affecting the running application scenarios; when the load is low, the deep algorithm can be used to make full use of the resources of the terminal device without affecting other processes or application scenarios.
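The load-based selection above might be sketched as follows. The patent states only that a first/second/third preset range selects a deep/medium/light algorithm, so the concrete load boundaries here are assumptions for illustration.

```python
def choose_defrag_algorithm(system_load: float) -> str:
    """Map the current system load (0.0 to 1.0) to a defragmentation depth.
    The concrete range boundaries below are assumed, not from the patent."""
    if system_load < 0.3:    # first preset range: load is low
        return "deep"
    elif system_load < 0.7:  # second preset range: load is moderate
        return "medium"
    else:                    # third preset range: load is high
        return "light"
```

The design point is that deeper (more disruptive) defragmentation runs only when the device is otherwise idle.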
  • With reference to the first aspect, the first implementation manner of the first aspect, or the second implementation manner of the first aspect, in a third implementation manner of the first aspect, acquiring the probability of switching from the first application scenario to the one or more second application scenarios may include:
  • The terminal device first acquires the historical number of switches from the first application scenario to each of the one or more second application scenarios, and then calculates the switching probability for each second application scenario from those historical counts.
  • In this way, the switching probability can be used to predict the contiguous memory required for switching application scenarios, and memory defragmentation is actively performed accordingly, so that the contiguous memory available on the terminal device satisfies what the switch requires, improving the efficiency with which the terminal device switches application scenarios.
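Estimating switching probabilities from historical switch counts might look like this minimal sketch; the function name and the data shape (a flat list of scenarios switched to from the current scenario) are assumptions for illustration.

```python
from collections import Counter

def switch_probabilities(history: list[str]) -> dict[str, float]:
    """Estimate, from the historical record of scenarios switched to from
    the current (first) scenario, the probability of switching to each
    second application scenario: count / total."""
    counts = Counter(history)
    total = sum(counts.values())
    return {scenario: n / total for scenario, n in counts.items()}
```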
  • With reference to the first aspect or any one of the first to third implementation manners of the first aspect, in a fourth implementation manner of the first aspect, determining the target contiguous memory according to the switching probabilities that satisfy the preset condition and the contiguous memory required by each second application scenario whose switching probability satisfies the preset condition may include:
  • The terminal device determines, from the one or more second application scenarios, the second application scenarios whose switching probability is greater than a threshold, and then determines the target contiguous memory according to the contiguous memory required by those scenarios.
  • In this way, second application scenarios whose switching probability is not greater than the threshold are filtered out, the target contiguous memory is determined from the contiguous memory required by the scenarios whose switching probability is greater than the threshold, and the terminal device sorts out available contiguous memory larger than the target contiguous memory.
  • Optionally, the terminal device determining the target contiguous memory according to the contiguous memory required by the one or more second application scenarios whose switching probability is greater than the threshold may include:
  • The terminal device performs a weighted operation over the switching probability and the required contiguous memory of each second application scenario whose switching probability is greater than the threshold, to obtain the target contiguous memory.
  • The weight used in the weighted operation may correspond to the switching probability of each second application scenario; for example, the higher the switching probability, the greater the weight.
  • In this way, weighting the switching probability and the required contiguous memory of each qualifying second application scenario makes the target contiguous memory closer to the contiguous memory the terminal device will actually need.
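The weighted operation above, with weights proportional to the switching probability, might be sketched as follows. The normalization choice (dividing by the sum of qualifying probabilities) is an assumption, since the patent does not fix the exact weighting formula.

```python
def weighted_target_memory(candidates: dict[str, tuple[float, int]]) -> int:
    """candidates maps scenario -> (switch probability, required contiguous KB),
    already filtered to probabilities above the threshold. The weight of each
    scenario is its normalized switching probability, so a more likely switch
    contributes more to the target."""
    total_p = sum(p for p, _ in candidates.values())
    return round(sum(p / total_p * mem for p, mem in candidates.values()))
```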
  • the terminal device determines the target continuous memory according to the contiguous memory required by the one or more second application scenarios whose switching probability is greater than a threshold, including:
  • The terminal device determines, from the second application scenarios whose switching probability is greater than the threshold, the target application scenario that requires the largest contiguous memory, and uses the contiguous memory required by that target application scenario as the target contiguous memory.
  • In this way, the largest contiguous memory required among the second application scenarios whose switching probability is greater than the threshold is used as the target contiguous memory, satisfying the contiguous memory required by every such scenario and improving the efficiency with which the terminal device switches application scenarios.
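The alternative max-based strategy above can be sketched in one line; the function name and data shape mirror the hypothetical weighted sketch and are likewise assumptions.

```python
def max_target_memory(candidates: dict[str, tuple[float, int]]) -> int:
    """candidates maps scenario -> (switch probability, required contiguous KB),
    already filtered to probabilities above the threshold. Taking the largest
    requirement covers every candidate scenario."""
    return max(mem for _, mem in candidates.values())
```

Compared with the weighted strategy, this one guarantees enough contiguous memory for whichever qualifying scenario is switched to, at the cost of possibly over-defragmenting.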
  • the method may further include:
  • the terminal device sorts the memory fragments of the terminal device by a light memory defragmentation algorithm.
  • In this way, when the terminal device encounters unexpected contiguous memory consumption, or the available contiguous memory on the terminal device is still lower than the target contiguous memory when switching to the second application scenario, the terminal device can additionally perform light memory defragmentation to quickly sort out available contiguous memory and ensure that the application scenario switch proceeds normally.
  • a second aspect of the embodiment of the present application provides a terminal device having a function of implementing the method for memory management of the first aspect described above.
  • This function can be implemented by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the functions described above.
  • a third aspect of the embodiments of the present disclosure provides a terminal device, which may include:
  • a processor, a memory, a bus, and an input/output interface, where the processor, the memory, and the input/output interface are connected through the bus; the memory is configured to store program code; and the processor executes the steps of the foregoing method when the program code in the memory is executed.
  • A fourth aspect of the present application provides a storage medium. It should be noted that the part of the technical solution of the present invention that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and holds the computer software instructions used by the above device, including a program for executing the method designed for the terminal device in any of the first to second aspects above.
  • The storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • a fifth aspect of the embodiments of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method as described in any of the alternative embodiments of the first or second aspect of the present application.
  • In summary, the terminal device may acquire the probability of switching from the first application scenario to each of one or more second application scenarios, determine the target contiguous memory according to the switching probabilities that satisfy the preset condition and the contiguous memory required to start and run each qualifying second application scenario, and then, before switching from the first application scenario to a second application scenario, actively perform memory defragmentation so that the available contiguous memory on the terminal device is greater than the target contiguous memory. This guarantees the contiguous memory required by the second application scenario to be switched to, and improves the efficiency with which the terminal device switches to and runs the second application scenario.
  • FIG. 1 is a schematic diagram of a specific application of an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a memory page in an embodiment of the present application.
  • FIG. 3 is a frame diagram of a method for memory management in an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a method for memory management in an embodiment of the present application.
  • FIG. 5 is another schematic flowchart of a method for memory management in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of the buddy algorithm in an embodiment of the present application.
  • FIG. 7 is another schematic flowchart of a method for memory management in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of light memory defragmentation in the embodiment of the present application.
  • FIG. 9 is a schematic diagram of medium memory defragmentation in the embodiment of the present application.
  • FIG. 10 is a schematic diagram of deep memory defragmentation in the embodiment of the present application.
  • FIG. 11 is a schematic diagram of a specific scenario of a method for memory management in an embodiment of the present application.
  • FIG. 12 is a schematic diagram of an embodiment of a terminal device according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of another embodiment of a terminal device according to an embodiment of the present application.
  • The present invention provides a memory management method and related equipment that actively perform memory defragmentation based on the application scenario and a prediction of contiguous memory demand, to meet the contiguous memory requirements of different application scenarios, reduce the waiting time of memory allocation, and improve application running efficiency.
  • The memory management method provided by the embodiment of the present application may be applied to a terminal device, which may be a mobile phone, a tablet computer, an in-vehicle mobile device, a PDA (personal digital assistant), a camera, a wearable device, or the like.
  • the terminal device 100 is logically divided into a hardware layer 21, an operating system 161, and an application layer 31.
  • the hardware layer 21 includes hardware resources such as an application processor 101, a microcontroller unit 103, a modem 107, a Wi-Fi module 111, a sensor 114, and a positioning module 150.
  • the application layer 31 includes one or more applications, such as an application 163, which may be any type of application such as a social application, an e-commerce application, a browser, or the like.
  • the operating system 161, as a software middleware between the hardware layer 21 and the application layer 31, is a computer program that manages and controls hardware and software resources.
  • operating system 161 includes a kernel 23, a hardware abstraction layer (HAL) 25, a library and runtime 27, and a framework 29.
  • the kernel 23 is used to provide the underlying system components and services, such as: power management, memory management, thread management, hardware drivers, etc.; hardware drivers include Wi-Fi drivers, sensor drivers, positioning module drivers, and the like.
  • the hardware abstraction layer 25 is a wrapper around the kernel driver, providing an interface to the framework 29 to mask the implementation details of the lower layers.
  • the hardware abstraction layer 25 runs in user space, while the kernel driver runs in kernel space.
  • the library and runtime 27 are also called runtime libraries, which provide the required library files and execution environment for the executable at runtime.
  • the library and runtime 27 includes an Android Runtime (ART) 271 and a library 273.
  • ART 271 is a virtual machine or virtual machine instance that can convert the application's bytecode to machine code.
  • Library 273 is a library that provides support for executable programs at runtime, including browser engines (such as webkits), script execution engines (such as JavaScript engines), graphics processing engines, and the like.
  • The framework 29 is used to provide various basic common components and services for applications in the application layer 31, such as window management, location management, and the like.
  • The framework 29 can include a phone manager 291, a resource manager 293, a location manager 295, and the like.
  • the functions of the various components of the operating system 161 described above may be implemented by the application processor 101 executing a program stored in the memory 105.
  • It should be noted that the terminal 100 may include fewer or more components than those shown in FIG. 1; the terminal device shown in FIG. 1 includes only the components most relevant to the implementations disclosed in the embodiments of the present application.
  • Multiple applications can be installed on the terminal device, and the terminal device can switch between these applications, or between multiple scenarios within one application, such as different functions or interfaces of the application.
  • When the terminal device switches application scenarios, including switching between multiple applications or between multiple scenarios within one application, the memory of the terminal device is involved in the operation of multiple modules, including memory allocation, memory reads and writes, and so on. Each application scenario requires contiguous memory to start and run.
  • an application such as a browser, a shopping software, and a game software is installed on the terminal device.
  • the terminal device can switch between multiple application scenarios, including switching between multiple applications installed on the terminal device, or between various scenarios within the application, such as functions or user interfaces. For example, switching from a browser to a shopping software, or switching from a shopping software to a photographing software, or switching from a photographing scene in a photographing software to a photo preview scene.
  • When the terminal device switches application scenarios, the memory also changes, and each application scenario needs memory during its startup and operation.
  • Memory can be divided in multiple ways. Taking the Linux system as a specific example, physical memory is divided by fixed-size pages into multiple memory pages, and the size of one memory page can be 4 KB.
  • A typical application scenario requires contiguous memory, that is, contiguous memory pages.
  • Memory pages can be divided into movable pages, non-movable pages, and reclaimable pages.
  • A movable page can be moved at will: the data stored in it can be moved to another memory page.
  • The pages occupied by ordinary user-space applications are movable pages, and applications are mapped to memory pages through the page table. Therefore, moving a page only requires copying the data from the original memory page to the target memory page and updating the page table entry.
  • A memory page may also be shared by multiple processes, in which case it corresponds to multiple page table entries.
  • After long-term memory allocation and release, part of physical memory becomes non-movable pages, that is, pages fixed in memory that cannot be moved elsewhere.
  • Most of the pages allocated by the core kernel are non-movable pages, and the longer the system runs, the more non-movable pages there are. Reclaimable pages cannot be moved directly, but they can be reclaimed: when an application can rebuild the data on another memory page, the data on the original memory page can be reclaimed.
  • Generally, reclaimable pages can be reclaimed by the system's preset memory reclamation process. For example, memory occupied by file-mapped data may belong to reclaimable pages, and the kswapd process in the Linux system can periodically reclaim such pages according to preset rules.
  • A region of memory can contain a variety of memory pages, including free pages, reclaimable pages, non-movable pages, and movable pages. Therefore, in order to ensure the normal operation of application scenarios on the terminal device, memory defragmentation is required to obtain contiguous free memory.
  • Memory defragmentation is a process of reducing memory fragmentation; it mainly consists of moving movable memory pages and reclaiming reclaimable pages to obtain physically contiguous free memory.
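A toy model of this process can make it concrete. In the sketch below, 'M' is a movable page, 'U' a non-movable page, 'R' a reclaimable page, and '.' a free page; reclaimable pages are freed and movable pages are packed within each run between non-movable pages. This is a deliberately simplified illustration, not the actual kernel compaction algorithm.

```python
def compact(pages: list[str]) -> list[str]:
    """Toy defragmentation over a page array: reclaim 'R' pages, then pack
    'M' pages toward the front of each run delimited by fixed 'U' pages so
    that the free '.' pages form contiguous runs."""
    # Step 1: reclaim reclaimable pages ('R' becomes free '.').
    pages = ['.' if p == 'R' else p for p in pages]
    # Step 2: within each segment between non-movable pages, movable pages
    # first, free pages last.
    out, segment = [], []
    for p in pages + ['U']:          # sentinel flushes the last segment
        if p == 'U':
            out += ['M'] * segment.count('M') + ['.'] * segment.count('.') + ['U']
            segment = []
        else:
            segment.append(p)
    return out[:-1]                  # drop the sentinel
```

The non-movable page 'U' acts as a barrier: free memory can only be made contiguous within each segment it bounds, which is why long-running systems with many non-movable pages fragment badly.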
  • the embodiment of the present application improves the operating system portion by taking the terminal device described in the foregoing figure as an example.
  • The improvement may relate to the memory-related parts of the operating system and the parts required for applications to run, and also to the application layer inside the terminal device, specifically the application switching process.
  • It should be noted that the functional modules shown in FIG. 1 are only some of the modules; the terminal device may include more modules related to memory, application running, and application switching, which are not limited herein.
  • The specific improvement provided by the memory management method in the embodiment of the present application includes predicting, before an application scenario switch is performed, the application scenario to be switched to and the contiguous memory it requires, and actively performing memory defragmentation to sort out sufficient contiguous memory.
  • In this way, the terminal device can allocate sufficient contiguous memory for the application scenario when switching application scenarios.
  • the framework of the method for memory management in the embodiment of the present application is as shown in FIG. 3.
  • the terminal device can predict the target continuous memory required for the application scenario to be switched by calculating or learning before the application starts or performs the application scenario switching. Then, the terminal device performs memory defragmentation according to the target contiguous memory, and sorts out contiguous memory that can be used, so that the available contiguous memory of the terminal device is larger than the target contiguous memory.
  • the application scenario in the embodiments of the present application may be an application on the terminal device, or a scenario in an application on the terminal device, such as an application function, a user interface, and the like. That is, the switching of the application scenario in the application may be the switching between the applications on the terminal device, or the scenario switching within an application on the terminal device, which is not limited herein.
  • the contiguous memory is allocated for the application scenario to be switched, so that the terminal device meets the contiguous memory required by the application scenario, so that the application scenario can run normally.
  • The memory management method provided by the present application aims to guarantee the contiguous memory requirement of the application scenario to be switched to: the terminal device can predict the target contiguous memory required by that scenario before performing the switch, and then perform memory defragmentation. Therefore, in the embodiment of the present application, the contiguous memory required by the application scenario to be switched to can be sorted out before the terminal device performs the switch, ensuring that this memory is available and improving the efficiency of application scenario switching.
  • the flow of the method for memory management in the embodiment of the present application is as shown in FIG. 4 , and the flowchart of the method for memory management of the present application includes:
  • One or more applications may be included on the terminal device, that is, two or more. Multiple application scenarios can be included in each application. When the terminal device is running, it can be switched in the multiple application scenarios. If the terminal device is currently running the first application scenario, the terminal device may obtain a handover probability from the first application scenario to the second application scenario of the one or more second application scenarios, and the manner of obtaining the handover probability may be According to the number of times of application scenario switching or deep learning on the terminal device. For example, if the terminal device is currently running a browser, that is, the first application scenario, the terminal device may acquire, from the browser, switch to each second application scenario of the one or more second application scenarios, such as a camera, a game, a shopping software, and the like.
  • For example, the probability of switching from the browser to the camera is 15%, the probability of switching to the shopping software is 23%, the probability of switching to the game is 3%, and so on.
  • If a chat session scene in WeChat is running on the terminal device, the probability of switching to the WeChat red envelope scene may be 30%, and the probability of switching to the circle of friends scene may be 50%.
  • The first application scenario and the second application scenario may be different application scenarios in different applications or in the same application, and may be adjusted according to actual design requirements, which is not limited herein.
  • The terminal device needs to obtain the switching probability of each application scenario in the one or more second application scenarios, as well as the contiguous memory required by each of them. For example, running a game requires 60 KB of contiguous memory, running a camera requires 500 KB of contiguous memory, and so on.
  • Alternatively, only the contiguous memory required by each second application scenario whose switching probability is greater than a preset value may be obtained.
  • Step 401 may be performed first, or step 402 may be performed first, which is not limited herein.
  • The terminal device may calculate the target contiguous memory according to the switching probability and the contiguous memory required by each second application scenario whose switching probability satisfies the preset condition.
  • The terminal device can perform memory defragmentation before switching to any one of the second application scenarios, so that the contiguous memory available on the terminal device is larger than the target contiguous memory. In this way, the terminal device can allocate sufficient contiguous memory when switching application scenarios, ensuring the contiguous memory required to switch from the first application scenario to one of the second application scenarios.
  • The method for calculating the target contiguous memory may be as follows. First, the second application scenarios whose switching probability is greater than the threshold are determined. If there are at least two such scenarios, a weighted operation is applied to the switching probability and the required contiguous memory of each of them to obtain the target contiguous memory. Alternatively, the terminal device determines the maximum contiguous memory required among the second application scenarios whose switching probability is greater than the threshold and uses it as the target contiguous memory. The method can be adjusted according to actual design requirements, which is not limited herein.
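The two calculation options above (probability-weighted combination, or the maximum requirement among likely scenarios) can be sketched as follows. The function name, the 10% threshold, and the memory figures are illustrative assumptions, not values fixed by the method.

```python
# Hypothetical sketch of the target-contiguous-memory calculation; the
# threshold and scenario data are illustrative assumptions.

THRESHOLD = 0.10  # switching-probability threshold for filtering scenarios

def target_contiguous_memory(candidates, strategy="weighted"):
    """candidates: list of (switch_probability, required_contiguous_kb).

    Scenarios at or below THRESHOLD are filtered out; the rest are either
    combined by a probability-weighted average or the maximum requirement
    is taken, matching the two options described in the text.
    """
    kept = [(p, mem) for p, mem in candidates if p > THRESHOLD]
    if not kept:
        return 0
    if strategy == "max":
        return max(mem for _, mem in kept)
    # Weighted: weights are proportional to the switching probability.
    total_p = sum(p for p, _ in kept)
    return sum(p * mem for p, mem in kept) / total_p

# Browser example from the text: camera 15% / 500 KB, shopping 23%, game 3%.
scenarios = [(0.15, 500), (0.23, 200), (0.03, 60)]
print(target_contiguous_memory(scenarios, "max"))  # game filtered out; 500
```

With the "max" strategy the game (3%) is filtered out and the camera's 500 KB dominates; with the weighted strategy the result lands between the camera's and the shopping software's requirements, closer to the more likely one.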
  • the terminal device determines whether the available contiguous memory is greater than the target contiguous memory, that is, determines whether the size of the contiguous memory available on the terminal device is greater than the size of the target contiguous memory. If the available contiguous memory is greater than the target contiguous memory, step 405 is performed. If the available contiguous memory is not greater than the target contiguous memory, step 406 is performed.
  • the available contiguous memory is contiguous memory that can be allocated to the second application scenario on the terminal device.
  • the contiguous memory available on the terminal device can guarantee the contiguous memory required to switch from the first application scenario to one of the second application scenarios.
  • In this case, the terminal device may perform memory defragmentation or may skip it, which can be adjusted according to actual design requirements and is not limited herein.
  • the terminal device switches from the first application scenario to the second application scenario to ensure continuous memory required to switch from the first application scenario to the second application scenario.
  • the terminal device needs to perform memory defragmentation so that the contiguous memory available on the terminal device is not less than the target contiguous memory, so that the terminal device has sufficient contiguous memory to allocate when switching to the second application scenario.
  • The specific steps of memory defragmentation may be: arranging the memory pages on the terminal device, moving movable pages, or reclaiming reclaimable pages, so as to free up contiguous memory. The freed contiguous memory can then be allocated to the switched-to application scenario when the terminal device performs the application scenario switch.
  • Memory defragmentation may or may not be performed, which can be adjusted according to actual design requirements and is not limited herein.
  • The switching probability of the terminal device switching from the first application scenario to each of the second application scenarios is determined, and the target contiguous memory is then calculated from the switching probabilities and the required memory.
  • The memory is defragmented according to the target contiguous memory, so that the contiguous memory available on the terminal device is not less than the target contiguous memory. This ensures that when the terminal device switches from the first application scenario to one of the one or more second application scenarios, enough contiguous memory is available, and the efficiency of application scenario switching is improved.
  • FIG. 5 is a schematic diagram of another embodiment of the method for memory management in the embodiment of the present application, which includes:
  • the first application scenario is started.
  • The first application scenario is the application scenario currently running on the terminal device. After the first application scenario is started and running normally, the terminal device may predict the application scenario to be switched to next and free up enough contiguous memory in advance, so that sufficient contiguous memory is guaranteed for the switched-to application scenario, improving the efficiency with which the terminal switches application scenarios.
  • The terminal device can collect data on application scenario switches, for example, the number of times the terminal device switched from application scenario A to application scenario B, or from application scenario A to application scenario C, within the past 24 hours.
  • A specific implementation may be to insert a switch count variable into a scenario start function in the terminal device, which is incremented each time an application scenario switch occurs, for example, switching from a WeChat chat scene to a circle of friends scene.
  • The association relationship between application scenarios can be determined from the number of switches between them, and an application scenario association matrix can be generated. The application scenario to be switched to can then be determined according to this matrix, that is, the switching probability of switching to each second application scenario.
  • The switching probability is calculated from these counts. For example, if switching from application scenario A to application scenario B occurred 50 times, from application scenario A to application scenario C 30 times, and from application scenario A to application scenario D 20 times, then, when application scenario A (that is, the first application scenario) is currently running, the switching probability to application scenario B is 50%, to application scenario C is 30%, and to application scenario D is 20%.
  • the application scenario B, the application scenario C, and the application scenario D are one or more second application scenarios in the foregoing FIG. 4 .
  • the number of times of switching from application scenario A to application scenario B is 500, and the number of times from application scenario A to application scenario C is 100.
  • the probability of switching from the application scenario A to the application scenario B is greater than the probability of switching from the application scenario A to the application scenario C.
  • the application context association matrix can be as shown in Table 1 below:
  • The application scenario association matrix is used to indicate the number of times the terminal device switches from one application scenario to another. For example, in the first row of Table 1, the number of switches from application scenario A to application scenario B is 100, to application scenario C is 10, to application scenario D is 20, to application scenario E is 0, and to application scenario F is 1. These counts can be used to calculate the probability of switching from application scenario A to each of the other application scenarios.
  • the probability of the terminal device switching from the application scenario A to the application scenario B is greater than the application scenario C, the application scenario D, the application scenario E, and the application scenario F.
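The counting and normalization described above can be sketched as follows. The dictionary-based matrix, the function names, and the example counts (A→B 50, A→C 30, A→D 20, as in the text) are illustrative assumptions about one possible implementation.

```python
# Illustrative sketch of deriving switching probabilities from an
# application scenario association matrix of switch counts.

from collections import defaultdict

switch_counts = defaultdict(dict)  # association matrix: src -> {dst: count}

def record_switch(src, dst):
    """Increment the counter each time a scenario switch occurs."""
    switch_counts[src][dst] = switch_counts[src].get(dst, 0) + 1

def switch_probabilities(src):
    """Normalize the row of the association matrix for scenario `src`."""
    row = switch_counts.get(src, {})
    total = sum(row.values())
    return {dst: n / total for dst, n in row.items()} if total else {}

# Example from the text: A->B 50 times, A->C 30 times, A->D 20 times.
for dst, n in (("B", 50), ("C", 30), ("D", 20)):
    for _ in range(n):
        record_switch("A", dst)
print(switch_probabilities("A"))  # B: 0.5, C: 0.3, D: 0.2
```

Because each switch simply increments a cell of the matrix, the probabilities stay current as new switches are recorded, which matches the real-time updating described below.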
  • The application scenario association matrix may be updated every time an application scenario switch occurs, so that application switching on the terminal device is recorded in real time.
  • The record may be updated by averaging the original record with the new data, or by combining them through a weighted operation, which is not limited herein.
  • the association relationship may be updated every time the handover is performed, so that the terminal device can predict the application scenario to be switched according to more historical handover data.
  • The more historical switching data there is, the higher the accuracy of the terminal device's prediction of the switched-to scenario. Therefore, the terminal device can improve prediction accuracy by updating the application scenario association relationship, thereby ensuring that the available contiguous memory freed by the terminal device satisfies the application scenario switch. In addition, the contiguous memory requirement of the next application scenario needs to be identified.
  • A specific implementation may be to insert a count variable into a memory allocation function in the terminal device to count the contiguous memory allocated, and to sample the count each time an application scenario is about to enter, has finished entering, and exits. The difference between the counts collected at entry completion and at the point of being about to enter is the contiguous memory requirement when the application scenario starts, and the difference between the counts at exit and at entry completion is the overall contiguous memory requirement of the application scenario.
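The counter sampling described above can be sketched as follows. All names here are illustrative assumptions, not kernel APIs; the allocation sizes in the example are made up.

```python
# Hedged sketch of the counting described above: a counter in the memory
# allocation path is sampled at "about to enter", "entry complete", and
# "exit" of a scenario; the differences give its contiguous-memory needs.

allocated_kb = 0  # running total of contiguous memory handed out

def alloc_contiguous(kb):
    global allocated_kb
    allocated_kb += kb

def profile_scenario(startup_allocs, runtime_allocs):
    ready = allocated_kb                 # sample at "about to enter"
    for kb in startup_allocs:
        alloc_contiguous(kb)
    entered = allocated_kb               # sample at "entry complete"
    for kb in runtime_allocs:
        alloc_contiguous(kb)
    exited = allocated_kb                # sample at "exit"
    startup_need = entered - ready       # requirement to start the scenario
    overall_need = exited - entered      # requirement while it runs
    return startup_need, overall_need

print(profile_scenario([40, 20], [100, 60]))  # (60, 160)
```

Sampling differences of a single running counter avoids having to attribute each allocation to a scenario individually; only the three sample points per scenario are needed.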
  • Only the contiguous memory of those application scenarios whose probability of being switched to from the current application scenario is greater than the threshold may be collected. For example, if there are 10 application scenarios whose switching probability from the current application scenario is greater than 10%, only the contiguous memory required by those 10 application scenarios may be collected. The specific collection scope can be adjusted according to design requirements.
  • Step 502 may be performed first, or step 504 may be performed first, which can be adjusted according to actual design requirements and is not limited herein.
  • The contiguous memory required by each application scenario may also be recorded, and the record updated after each contiguous memory allocation.
  • The contiguous memory allocated by the current switch may be combined by a weighted operation with the contiguous memory allocated by historical switches, or the record of historically allocated contiguous memory may simply be replaced by that of the current switch. This can be adjusted according to the actual scenario and is not limited herein.
  • The contiguous memory requirement of each second application scenario may be determined from the collected data. For example, the contiguous memory requirement during startup of each second application scenario and its contiguous memory requirement while running may together determine the contiguous memory required to switch to and run that application scenario.
  • Memory fragments are managed by a buddy algorithm: the system kernel arranges the free memory pages in each zone into linked-list queues by powers of 2, stored in the free_area array.
  • FIG. 6 is a schematic diagram of the buddy algorithm in the embodiment of the present application.
  • there are 16 memory pages in the system memory including memory page 0 to memory page 15, which is 0-15 in the pages row in Figure 6.
  • The 16 memory pages are arranged into linked-list queues by powers of 2. Since there are only 16 pages, 4 levels suffice to describe the bitmap of the 16 memory pages, that is, order0 to order3 in FIG. 6.
  • High-order contiguous memory can be quickly sorted out through low-order contiguous memory, and low-order contiguous memory can be quickly allocated through high-order contiguous memory. Therefore, when determining the continuous memory requirements of the application scenario, continuous memory allocation can be performed by the buddy algorithm.
  • the specific format is shown in Table 2:
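The per-order free-list layout described above can be sketched as follows. The 16-page memory, the 4 orders, and the greedy coalescing function are illustrative assumptions, a simplification of the kernel's free_area bookkeeping rather than its actual implementation.

```python
# Minimal sketch of the buddy free-list layout: free pages are grouped
# into size-aligned blocks of 2^order and kept in per-order lists.

ORDERS = 4  # order0..order3, as in FIG. 6

def build_free_area(free_pages):
    """Greedily coalesce sorted free page numbers into aligned 2^k blocks."""
    free_area = {order: [] for order in range(ORDERS)}
    pages = sorted(free_pages)
    i = 0
    while i < len(pages):
        placed = False
        for order in range(ORDERS - 1, -1, -1):
            size = 1 << order
            block = list(range(pages[i], pages[i] + size))
            # A buddy block must be size-aligned and fully free.
            if pages[i] % size == 0 and pages[i:i + size] == block:
                free_area[order].append(pages[i])
                i += size
                placed = True
                break
        if not placed:
            i += 1
    return free_area

# Pages 4-7 free (one aligned order-2 block); pages 9 and 10 stay order-0.
print(build_free_area([4, 5, 6, 7, 9, 10]))
```

This illustrates the point in the text: low-order free blocks coalesce into high-order contiguous memory when aligned and adjacent, while high-order blocks can be split back down when a small allocation arrives.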
  • the second application scenario to be switched may be predicted while predicting the required target contiguous memory.
  • The application scenario association matrix can be used to determine the switching probability from the current application scenario to each of the other application scenarios. Application scenarios below a threshold can be filtered out by setting that threshold; for example, if the switching probability to application scenario A is less than 10%, application scenario A is filtered out.
  • The specific steps of determining the target contiguous memory may be: first filtering out, from the one or more second application scenarios, those whose switching probability is not greater than the threshold; then applying a weighted operation to the switching probability and the required contiguous memory of each remaining second application scenario to obtain the target contiguous memory. Specifically, the weights in the weighted operation may correspond to the switching probability of each second application scenario, so that a scenario with a larger switching probability carries a larger weight, that is, the obtained target contiguous memory is closer to the contiguous memory required by the second application scenario that is more likely to be switched to. Alternatively, the maximum contiguous memory required among the second application scenarios whose switching probability is greater than the threshold may be used as the target contiguous memory, or the target contiguous memory may be obtained by other algorithms, which can be adjusted according to actual device requirements and is not limited herein.
  • Memory defragmentation may be performed proactively, so that before the terminal device switches to the second application scenario, the contiguous memory available on the terminal device is greater than the target contiguous memory.
  • the specific defragmentation method is explained in detail in the embodiment of FIG. 7 as follows.
  • The target contiguous memory required to switch to the second application scenario is predicted in advance, and memory defragmentation is performed in advance so that the contiguous memory available on the terminal device is larger than the target contiguous memory. Therefore, when switching to the second application scenario, the terminal device can use this contiguous memory to start and run it, reducing the waiting time of the switch and improving the efficiency with which the terminal device switches application scenarios.
  • a schematic diagram of another embodiment of the method for memory management in the embodiment of the present application may include:
  • the terminal device may start memory defragmentation to sort out the available contiguous memory. Specifically, it is described in detail in the following steps 702 to 708.
  • step 702. Calculate the currently available contiguous memory. If the target contiguous memory is satisfied, go to step 703. If the target contiguous memory is not met, go to step 704.
  • the terminal device can calculate the currently available contiguous memory, that is, the contiguous memory that the terminal device can currently allocate to the second application scenario.
  • all available contiguous memory on the current terminal device can be obtained from the buddy system, and it is determined whether the available contiguous memory satisfies the target contiguous memory.
  • If the available contiguous memory on the terminal device is smaller than the target contiguous memory, step 704 is performed to quickly defragment the memory so that the available contiguous memory is no longer smaller than the target contiguous memory. If the available contiguous memory on the terminal device is not less than the target contiguous memory, the terminal device may perform the non-movable page dense area calculation, that is, step 703.
  • the non-movable page dense area can be calculated.
  • Within a preset unit range, if the number of non-movable pages exceeds the dense threshold, the unit range is considered a non-movable page dense area. For example, if there are more than 100 non-movable pages within 1024 pages, those 1024 pages may be considered a non-movable page dense area.
  • Step 704 may be performed when the non-movable page dense area is greater than the dense threshold. When the non-movable page dense area is not greater than the dense threshold, the memory defragmentation can be stopped.
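The per-unit classification described above can be sketched as follows. The 1024-page unit and the 100-page dense threshold come from the example in the text; the function names and the area labels are illustrative assumptions.

```python
# Sketch of the density check: within each unit range of 1024 pages,
# count non-movable pages; more than 100 marks the range as dense.

UNIT = 1024
DENSE_THRESHOLD = 100

def classify_unit(non_movable_count):
    """Classify one unit range by its non-movable page count."""
    if non_movable_count > DENSE_THRESHOLD:
        return "dense"      # non-movable page dense area
    if non_movable_count > 0:
        return "common"     # non-movable page common area
    return "movable"        # movable page area

def dense_areas(non_movable_flags):
    """non_movable_flags: one bool per page; returns dense unit indices."""
    areas = []
    for start in range(0, len(non_movable_flags), UNIT):
        count = sum(non_movable_flags[start:start + UNIT])
        if classify_unit(count) == "dense":
            areas.append(start // UNIT)
    return areas

flags = [False] * 2048
for page in range(1024, 1024 + 150):  # 150 pinned pages in the second unit
    flags[page] = True
print(dense_areas(flags))  # [1]
```

The same classification feeds the later steps: movable-only units are handled by the light algorithm, common units by the medium algorithm, and dense units only by the deep algorithm.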
  • the non-movable page dense area may be calculated, or the non-movable page dense area may not be calculated, that is, step 703 may be an optional step.
  • The non-movable page dense area can also be calculated directly. If it is larger than the preset value, contiguous memory can be quickly freed, that is, memory defragmentation is performed by the light memory defragmentation algorithm, which is described in detail in step 704.
  • If the non-movable page dense area is not greater than the preset value, memory defragmentation by the light memory defragmentation algorithm may still be performed, or may be skipped, which can be adjusted according to actual design requirements and is not limited herein.
  • If the terminal device simply skips non-movable pages when performing memory defragmentation, then after the system has been running for a long time, the number of non-movable pages keeps growing and the degree of memory fragmentation increases greatly, reducing the success rate of freeing up large contiguous memory. This slows down memory defragmentation and memory allocation, which reduces the operating efficiency of the terminal device. Therefore, in the embodiment of the present application, the non-movable page dense area is calculated, and the subsequent memory defragmentation includes defragmenting areas that contain non-movable pages, as specifically described in steps 707 and 708 below.
  • Defragmenting areas that contain non-movable pages avoids the reduction in efficiency and success rate of memory defragmentation caused by the growth of non-movable pages after the system has run for a long time, thereby improving the efficiency and success rate of the terminal device's memory defragmentation.
  • When the available contiguous memory on the terminal device does not satisfy the target contiguous memory, or the non-movable page dense area on the terminal device is greater than a preset value, the terminal device quickly frees contiguous memory, which includes performing memory defragmentation through the light memory defragmentation algorithm to defragment the movable page area. Specifically, the memory pages before and after defragmentation by the light memory defragmentation algorithm are shown in FIG. 8. A movable page area is a range of memory pages within the preset unit range that contains no non-movable pages; for example, if 1024 pages contain no non-movable page, those 1024 pages may be considered to belong to a movable page area.
  • Memory defragmentation by the light memory defragmentation algorithm means defragmenting the movable page area: the movable pages in the area are moved together into contiguous memory, so that the free pages form contiguous memory. For example, if the movable page area at addresses 0001-0100 contains 20 non-contiguous movable pages, those 20 movable pages can be moved together to addresses 0001-0020, so that the memory pages after address 0020 are all free pages, freeing up contiguous memory. In this way, the light memory defragmentation algorithm can quickly free idle memory on the terminal device to ensure the contiguous memory required for application scenario switching.
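The compaction step just described can be sketched as follows. The page markers and the 100-page area with 20 scattered movable pages mirror the text's example; the function itself is an illustrative simplification (real compaction migrates pages one by one rather than rebuilding the list).

```python
# Hedged sketch of light defragmentation: within a movable page area,
# movable pages are slid toward the start so the free pages behind them
# form one contiguous run.

def compact_movable_area(pages):
    """pages: list of 'M' (movable, in use) or 'F' (free) markers.

    Returns the compacted layout and the length of the free run created.
    """
    movable = [p for p in pages if p == "M"]
    compacted = movable + ["F"] * (len(pages) - len(movable))
    return compacted, len(pages) - len(movable)

# 20 scattered movable pages in a 100-page area, as in the text's example.
area = ["M", "F", "F", "F", "F"] * 20
layout, free_run = compact_movable_area(area)
print(free_run)      # 80 contiguous free pages after compaction
print(layout[:20])   # the 20 movable pages now sit at the front
```

Because the area contains no non-movable pages, every in-use page can be moved, which is what makes this the cheapest tier of defragmentation.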
  • The available contiguous memory can be quickly freed by the light memory defragmentation algorithm, ensuring that more contiguous memory can be allocated when the terminal device switches applications. For example, suppose the terminal device is currently running application scenario A. If the currently available contiguous memory on the terminal device does not satisfy application scenario A, or the non-movable page dense area on the terminal device is greater than the preset value, the terminal device can quickly defragment the movable page area and free available contiguous memory. This prevents the situation where the terminal device suddenly switches application scenarios and the contiguous memory is insufficient, improving the efficiency and reliability of application scenario switching. If the non-movable page dense area is larger than the preset value, the number of non-movable pages on the terminal device has increased, and contiguous memory can be quickly defragmented so that more contiguous memory on the terminal device can be allocated to application scenarios.
  • The contiguous memory available on the terminal device can thus be increased, preventing contiguous memory shortages when the terminal device suddenly switches application scenarios. If the contiguous memory available on the terminal device still does not meet the target contiguous memory, or to further increase the available contiguous memory, deeper memory defragmentation can be performed. Specifically, the memory defragmentation algorithm can be dynamically adjusted according to the range of the system load of the terminal device, so as to use the terminal device's resources reasonably and reduce the impact on the application scenarios running on it.
  • The system load can be used to indicate how busy the system of the terminal device is, and can be a coefficient of occupancy by processes that are running or waiting to run per unit time on the terminal device.
  • the system load can be an average of the number of processes in the running queue of the terminal device per unit time.
  • the system load of the terminal device can be queried by using preset query instructions, such as uptime, top command, and the like.
  • the system load of the terminal device can usually be expressed by the occupancy rate of the central processing unit (cpu) of the terminal device or the throughput of the input/output (io).
  • the method for determining the system load of the terminal device may be: reading a node of the CPU or io in the system, thereby acquiring the system load of the terminal device. Then, the terminal device can dynamically adjust according to the system load, that is, dynamically adjust the memory fragmentation algorithm, hierarchically implement memory defragmentation, improve the efficiency of the terminal device for performing memory defragmentation, and reduce the application scenario of performing memory defragmentation on the terminal device. Impact.
  • If the system load is in the first preset range, the terminal device determines that the memory defragmentation algorithm is the deep memory defragmentation algorithm, that is, performs step 708. If the system load is in the second preset range, the terminal device determines that the memory defragmentation algorithm is the medium memory defragmentation algorithm, that is, performs step 707. If the system load is in the third preset range, the terminal device determines that the memory defragmentation algorithm is the light memory defragmentation algorithm, that is, performs step 706.
  • When the system load is below 20%, the system is relatively idle, so performing the deep memory defragmentation algorithm does not affect the application scenario currently running on the terminal device or other application scenarios. The deep memory defragmentation algorithm includes performing memory defragmentation on the non-movable page dense area, the non-movable page common area, and the movable page area. When the system load is between 20% and 40%, the terminal device performs the medium memory defragmentation algorithm, which, compared with the deep memory defragmentation algorithm, omits defragmentation of the non-movable page dense area, so as to reduce the system load during defragmentation and avoid affecting the running efficiency of the currently running application scenario or other application scenarios on the terminal device. When the system load is between 40% and 60%, the system is somewhat busy, and only light memory defragmentation is performed, defragmenting only the movable page area, to reduce the impact of memory defragmentation on the application scenario or other application scenarios currently running on the terminal device. When the system load exceeds 60%, the system of the terminal device is busy, and defragmentation is skipped to avoid affecting the application scenarios running on the terminal device.
  • The first preset range may be less than 20%, the second preset range 20%-40%, and the third preset range 40%-60%.
  • The first preset range, the second preset range, and the third preset range may also be other values, which can be adjusted according to actual design requirements and are not limited herein.
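The load-based tier selection described above can be sketched as follows. The 20%/40%/60% boundaries are the example values given in the text; the function name and tier labels are illustrative assumptions.

```python
# Sketch of mapping system load to a defragmentation depth, using the
# example boundaries from the text.

def choose_defrag_algorithm(system_load):
    """Map system load (0.0-1.0) to a defragmentation algorithm."""
    if system_load < 0.20:       # first preset range: system mostly idle
        return "deep"            # dense + common + movable areas (step 708)
    if system_load < 0.40:       # second preset range
        return "medium"          # common + movable areas (step 707)
    if system_load < 0.60:       # third preset range
        return "light"           # movable areas only (step 706)
    return "none"                # system busy: skip defragmentation

for load in (0.10, 0.30, 0.50, 0.75):
    print(load, choose_defrag_algorithm(load))
```

Each tier strictly contains the work of the tier below it, so lowering the depth as load rises trades freed contiguous memory for reduced interference with running application scenarios.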
  • Different memory defragmentation algorithms can be determined according to the system load, reducing the impact on the currently running application scenario or other application scenarios on the terminal device, so that the application scenarios on the terminal device can run normally while available contiguous memory is freed, improving the efficiency of the terminal device when switching application scenarios.
  • the terminal device When the system load on the terminal device is in the third preset range, the terminal device performs a light memory defragmentation algorithm to organize the movable page area.
  • the step of performing the memory defragmentation is similar to the method of quickly defragmenting the contiguous memory in the foregoing step 704, and details are not described herein.
  • The third preset range corresponds to a relatively high system load, so performing only light defragmentation avoids affecting the operations running on the terminal device.
  • the medium memory defragmentation algorithm is executed to perform memory defragmentation on the non-movable page common area and the movable page area.
  • the method for collating the movable page area is similar to the light memory defragmentation algorithm in the fast finishing contiguous memory in the foregoing step 704, and details are not described herein.
  • Within a preset unit range, if the number of non-movable pages is greater than 0 but does not exceed the dense threshold, the unit range is considered a non-movable page common area. For example, if 1024 pages contain more than 0 but no more than 100 non-movable pages, those 1024 pages may be considered to belong to a non-movable page common area.
  • The movable pages in the non-movable page common area are defragmented so that they occupy contiguous memory pages, thereby freeing contiguous pages.
  • the movable page can be moved to the free contiguous memory, or can be moved to the contiguous memory adjacent to the non-removable page, which can be adjusted according to actual design requirements, which is not limited herein.
  • The medium memory defragmentation algorithm may be performed on only the non-movable page common area and the movable page area, to adapt to a medium system load of the terminal device, improve its operating efficiency, and free up available contiguous memory in advance.
  • Areas containing non-movable pages are defragmented to prevent the efficiency and success rate of memory defragmentation from being reduced by the growth of non-movable pages after the system has run for a long time, thereby improving the efficiency and success rate of the terminal device's memory defragmentation.
  • the terminal device may perform a deep memory defragmentation algorithm, including performing memory defragmentation on the non-movable page dense area, the non-movable page normal area, and the movable page area.
  • The memory defragmentation of the movable page area is similar to the light memory defragmentation algorithm in the fast contiguous-memory freeing of step 704, and the memory defragmentation of the non-movable page common area is similar to the medium memory defragmentation algorithm in step 707; details are not described here again.
  • The specific process of memory defragmentation of the non-movable page dense area may be as shown in FIG.
  • the movable page of the non-movable page dense area may be moved to the free memory of the non-movable page dense area to increase the available contiguous memory on the terminal device.
  • the movable page of the non-movable page dense area can be moved to the free memory page spaced between the non-movable pages.
  • When the movable page area, the non-movable page common area, and the non-movable page dense area are all defragmented, the movable pages can be moved into the free memory pages between non-movable pages, freeing up more free pages.
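The deep-defragmentation step just described can be sketched as follows. The page markers and the small example area are illustrative assumptions; the point is that free slots interleaved with pinned pages absorb movable pages, releasing larger free runs elsewhere.

```python
# Illustrative sketch of deep defragmentation in a dense area: movable
# pages are relocated into the free slots between non-movable pages.

def fill_dense_area_gaps(dense_area, movable_pool):
    """dense_area: list of 'N' (non-movable) or 'F' (free) markers.
    movable_pool: count of movable pages awaiting relocation.

    Returns the updated area and how many movable pages were absorbed.
    """
    absorbed = 0
    result = []
    for slot in dense_area:
        if slot == "F" and absorbed < movable_pool:
            result.append("M")   # relocate one movable page into the gap
            absorbed += 1
        else:
            result.append(slot)
    return result, absorbed

area, used = fill_dense_area_gaps(["N", "F", "N", "F", "F", "N"],
                                  movable_pool=2)
print(area)   # ['N', 'M', 'N', 'M', 'F', 'N']
print(used)   # 2
```

The free pages trapped between non-movable pages are otherwise useless for large contiguous allocations, so filling them with relocated movable pages costs nothing in usable contiguous memory while vacating whole movable areas.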
  • the non-movable page normal area and the non-movable page dense area are arranged to avoid the efficiency and success rate of the memory defragmentation due to the increase of the non-movable page after the system is operated for a long time. It can reduce the severity of memory fragmentation of the terminal device after the system is running for a long time, and can improve the efficiency and success rate of the terminal device for performing memory defragmentation.
  • When the terminal device switches from the first application scenario to the second application scenario, if the available contiguous memory on the terminal device is insufficient to start or run the second application scenario, the terminal device may perform fast memory defragmentation, for example executing a light memory defragmentation algorithm, so that the available contiguous memory on the terminal device satisfies the contiguous memory required by the second application scenario. For example, suppose the terminal device currently runs the first application scenario, has predicted the second application scenario to be switched to, and has obtained the target contiguous memory, but needs to defragment because contiguous memory is insufficient; the terminal device can then enable the light memory defragmentation algorithm to quickly clear out available memory, ensuring that the second application scenario can start normally. Memory defragmentation is performed until the available contiguous memory cleared out is not less than the target contiguous memory, so as to ensure that the terminal device can switch application scenarios normally.
  • The terminal device may further perform memory defragmentation according to the system load, dynamically adjusting the memory defragmentation algorithm as the system load of the terminal device changes, to further increase the available contiguous memory on the terminal device. In this way, the resources of the terminal device can be used reasonably, the impact on application scenarios running on the terminal device is reduced, and the efficiency and reliability of switching application scenarios are improved.
  • The terminal device in the embodiments of this application may be a smartphone, a tablet computer, a mobile device, a PDA (Personal Digital Assistant), a camera, various wearable devices, or the like, which is not limited herein.
  • The following takes a specific application scenario on the terminal device as an example for further explanation. Suppose the terminal device is a smartphone in which multiple applications are installed, including WeChat and a camera. The user can switch from WeChat to the camera to take a photo. When the terminal device switches from WeChat to the camera, the camera is first opened, the camera preview is then entered, and a photo is then taken. A large amount of contiguous memory is used in the camera preview scenario and the camera photographing scenario.
  • The specific steps of the memory management method provided by this application may include: the terminal device collects the number of switches from WeChat to other application scenarios, thereby obtaining the number of switches from WeChat to the camera. The specific collection method may be to record each switch from WeChat to other application scenarios; for example, the number of switches from WeChat to the camera is 100, and the number of switches from WeChat to the application market is two. The contiguous memory required for each application or application scenario on the smartphone to start and run is then collected separately, including the contiguous memory sizes required for the camera preview scenario and the camera photographing scenario to run.
  • The specific collection method may be: insert a count variable into the memory allocation function in the smartphone to count the contiguous memory allocated each time, and take a reading each time the camera is about to be entered, has been entered, and is exited. The difference between the contiguous memory counted when entry completes and when entry begins is the contiguous memory required to start the camera, and the difference between exit and entry completion gives the overall contiguous memory requirement of the camera.
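The delta-based accounting described above can be sketched as follows. The checkpoint labels, class name, and byte counts are illustrative assumptions, not from the patent; the idea is simply that an allocator-side counter is sampled at scene-switch checkpoints and requirements are obtained by subtraction.

```python
# Hypothetical sketch: a counter hooked into the memory allocation function
# records contiguous memory handed out so far; sampling it at the camera's
# "ready to enter", "entry complete", and "exit" checkpoints yields the
# camera's startup and overall contiguous-memory requirements by difference.

class ContiguousMemCounter:
    def __init__(self):
        self.total = 0        # contiguous bytes allocated so far
        self.samples = {}

    def on_alloc(self, nbytes):
        # called from the memory allocation function on each allocation
        self.total += nbytes

    def sample(self, label):
        # called at a scene-switch checkpoint
        self.samples[label] = self.total

    def requirement(self, start_label, end_label):
        return self.samples[end_label] - self.samples[start_label]

counter = ContiguousMemCounter()
counter.sample('camera_ready')      # about to enter the camera
counter.on_alloc(300 * 1024)        # allocations during camera start-up
counter.sample('camera_entered')    # entry complete
counter.on_alloc(200 * 1024)        # allocations while previewing / photographing
counter.sample('camera_exit')       # leaving the camera

startup_need = counter.requirement('camera_ready', 'camera_entered')
overall_need = counter.requirement('camera_ready', 'camera_exit')
```

Here `startup_need` is 300 KB and `overall_need` is 500 KB, mirroring the start-up versus overall distinction drawn in the text.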
  • The number of switches from WeChat to other applications and application scenarios can be updated at the same time, so that camera switching information can continue to be collected subsequently. The contiguous memory required by the camera can also be updated, so that the smartphone determines the camera's contiguous memory requirement based on historical data and the newly collected data.
  • The probability of currently switching from WeChat to other applications or application scenarios can then be determined; for example, the probability of switching from WeChat to the camera may be determined to be 90%, at which point the smartphone can predict that it is about to switch to the camera scenario. The more samples of launching the camera the smartphone has, the more accurate the predicted probability and the higher the prediction efficiency: for example, with more than 100,000 samples the probability of starting the camera upon entering WeChat can be predicted, whereas with only a single sample little can be predicted. For the camera's contiguous memory requirement, however, only one sample is needed to make a prediction.
  • After the smartphone predicts that it will switch to the camera scenario, it identifies the contiguous memory required for the camera to start and run, including the contiguous memory required for camera preview and camera photographing, and then starts memory defragmentation. It first calculates the currently available contiguous memory on the smartphone. If the currently available contiguous memory on the smartphone is less than the contiguous memory required for the camera to start and run, the smartphone can perform fast memory defragmentation. Alternatively, instead of comparing the available contiguous memory with the camera's requirement, the smartphone may evaluate the current non-movable page dense area; if the current memory fragmentation of the smartphone is severe, the smartphone can likewise perform the subsequent memory defragmentation steps, performing fast memory defragmentation first.
  • Fast memory defragmentation can be performed by executing a light memory defragmentation algorithm to defragment the memory on the smartphone.
  • During defragmentation, the system load of the smartphone can be obtained continuously, and the memory defragmentation algorithm is then adjusted dynamically according to the system load of the smartphone. For example, when the system load is less than 20%, the deep memory defragmentation algorithm is executed, consolidating the non-movable page dense area, the non-movable page common area, and the movable page area. The specific consolidation algorithm is similar to the foregoing steps 706 to 708 in FIG. 7 and is not repeated here.
  • A schematic diagram of an embodiment of the terminal device may include: a data collection module 1201, configured to obtain the switching probability of switching from the first application scenario to each of one or more second application scenarios, where the first application scenario is the application scenario currently run by the terminal device; it can specifically be used to implement the specific steps of step 401 in the foregoing embodiment of FIG. 4. A contiguous memory requirement identification module 1202, configured to determine the target contiguous memory according to the switching probabilities that meet the preset condition and the contiguous memory required by each second application scenario, among the one or more second application scenarios, whose switching probability meets the preset condition; it may be used to implement the specific steps of step 403 in the foregoing embodiment of FIG. 4. An active memory defragmentation module 1203, configured to, if the contiguous memory available on the terminal device is not greater than the target contiguous memory, perform defragmentation according to the target contiguous memory before the terminal device switches from the first application scenario to any of the one or more second application scenarios, so that the contiguous memory available on the terminal device is greater than the target contiguous memory; it may be used to implement the specific steps of step 406 in the foregoing FIG. 4 embodiment.
  • The active memory defragmentation module 1203 is specifically configured to implement the specific steps of step 705 and the related steps in the foregoing FIG. 7 embodiment.
  • The data collection module 1201 is specifically configured to implement the specific steps of step 502 in the foregoing embodiment of FIG. 5.
  • The contiguous memory requirement identification module 1202 is specifically configured to determine the target contiguous memory according to the contiguous memory required by the second application scenarios whose switching probability is greater than the threshold; it may be used to implement the specific steps in step 506 in the foregoing FIG. 5 embodiment.
  • The contiguous memory requirement identification module 1202 is specifically configured to perform a weighted operation on the switching probability and the required contiguous memory of each second application scenario among the multiple second application scenarios whose switching probability is greater than the threshold, to obtain the target contiguous memory; it may specifically be used to implement the specific steps in step 506 in the foregoing embodiment of FIG. 5.
  • The contiguous memory requirement identification module 1202 is specifically configured so that the terminal device uses the contiguous memory required by the target application scenario as the target contiguous memory; it can be used to implement the specific steps in step 506 in the foregoing embodiment of FIG. 5.
  • The active memory defragmentation module 1203 is further configured so that the terminal device defragments memory fragments using a fast memory defragmentation algorithm; it can be used to implement the specific steps in step 704 in the foregoing embodiment of FIG. 7.
  • An embodiment of this application further provides a terminal device. As shown in FIG. 13, for ease of description, only the parts related to the embodiments of the present invention are shown; for details not disclosed, refer to the method parts of the embodiments of the present invention. The terminal device may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, or an in-vehicle computer; the following takes a mobile phone as an example:
  • FIG. 13 is a block diagram showing a partial structure of a mobile phone related to a terminal provided by an embodiment of the present invention.
  • The mobile phone includes: a radio frequency (RF) circuit 1310, a memory 1320, an input unit 1330, a display unit 1340, a sensor 1350, an audio circuit 1360, a wireless fidelity (WiFi) module 1370, and a processor 1380. The RF circuit 1310 can be used for receiving and transmitting signals during the transmission or reception of information or during a call; in particular, after receiving downlink information of a base station, it passes the information to the processor 1380 for processing, and it also transmits designed uplink data to the base station. Generally, the RF circuit 1310 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. The RF circuit 1310 can also communicate with the network and other devices via wireless communication.
  • The above wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
  • The memory 1320 can be used to store software programs and modules, and the processor 1380 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1320. The memory 1320 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 1320 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • The input unit 1330 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 1330 may include a touch panel 1331 and other input devices 1332. The touch panel 1331, also referred to as a touch screen, can collect touch operations of the user on or near it (such as operations performed by the user with a finger, a stylus, or another suitable object or accessory on or near the touch panel 1331) and drive the corresponding connecting apparatus according to a preset program. Optionally, the touch panel 1331 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, and sends them to the processor 1380, and can receive commands from the processor 1380 and execute them. In addition, the touch panel 1331 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1331, the input unit 1330 may further include other input devices 1332, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons and a switch button), a trackball, a mouse, a joystick, and the like.
  • The display unit 1340 can be used to display information input by the user or information provided to the user, as well as various menus of the mobile phone. The display unit 1340 can include a display panel 1341, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 1331 may cover the display panel 1341; after detecting a touch operation on or near it, the touch panel 1331 transmits the operation to the processor 1380 to determine the type of the touch event, and the processor 1380 then provides a corresponding visual output on the display panel 1341 according to the type of the touch event. Although in FIG. 13 the touch panel 1331 and the display panel 1341 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 1331 and the display panel 1341 may be integrated to implement the input and output functions of the mobile phone.
  • The mobile phone can also include at least one type of sensor 1350, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 1341 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 1341 and/or the backlight when the mobile phone moves to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes); when stationary, it can detect the magnitude and direction of gravity, and it can be used for applications that identify the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer attitude calibration), vibration-recognition related functions (such as a pedometer and tapping), and the like. As for the other sensors with which the mobile phone can also be configured, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, details are not described here.
  • The audio circuit 1360, a speaker 1361, and a microphone 1362 can provide an audio interface between the user and the mobile phone. The audio circuit 1360 can transmit the electrical signal converted from received audio data to the speaker 1361, which converts it into a sound signal for output; on the other hand, the microphone 1362 converts the collected sound signal into an electrical signal, which the audio circuit 1360 receives and converts into audio data. The audio data is then processed by the processor 1380 and sent via the RF circuit 1310 to, for example, another mobile phone, or output to the memory 1320 for further processing.
  • WiFi is a short-range wireless transmission technology. Through the WiFi module 1370, the mobile phone can help the user send and receive e-mails, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although FIG. 13 shows the WiFi module 1370, it can be understood that it is not an essential part of the mobile phone and may be omitted as needed without changing the essence of the invention.
  • The processor 1380 is the control center of the mobile phone. It connects the various parts of the entire phone using various interfaces and lines, and performs the phone's various functions and processes data by running or executing the software programs and/or modules stored in the memory 1320 and invoking the data stored in the memory 1320, thereby monitoring the phone as a whole. Optionally, the processor 1380 may include one or more processing units; preferably, the processor 1380 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 1380. In this embodiment of this application, the processor 1380 can perform the specific steps performed by the terminal device in the foregoing FIG. 3 through FIG. 11.
  • The mobile phone also includes a power source 1390 (such as a battery) that supplies power to the various components. Preferably, the power source can be logically coupled to the processor 1380 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. Although not shown, the mobile phone may further include a camera, a Bluetooth module, and the like, and details are not described here.
  • In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. The device embodiments described above are merely illustrative. The division into units is only a division by logical function; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms. The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • In addition, each functional unit in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit can be implemented in the form of hardware or in the form of a software functional unit. If the integrated unit is implemented in the form of a software functional unit and sold or used as a standalone product, it may be stored in a computer-readable storage medium. The computer-readable storage medium includes a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of FIG. 3 through FIG. 11 of this application. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

This application provides a memory management method and a related device: based on prediction of application scenarios and their contiguous-memory requirements, memory defragmentation is performed proactively, so as to meet the contiguous-memory needs of different application scenarios, reduce the waiting time of memory allocation, and improve application running efficiency. The method includes: a terminal device obtains the switching probability of switching from the currently running first application scenario to each of one or more second application scenarios; the terminal device then determines a target contiguous memory according to the contiguous memory required by the one or more second application scenarios, among the one or more second application scenarios, whose switching probability meets a preset condition; if the contiguous memory available on the terminal device is smaller than the target contiguous memory, then before the terminal device switches from the first application scenario to any second application scenario, the terminal device performs memory defragmentation so that the contiguous memory available on the terminal device is greater than the target contiguous memory.

Description

A memory management method and related device
This application claims priority to Chinese Patent Application No. 201810333058.6, filed with the China National Intellectual Property Administration on April 13, 2018 and entitled "A memory management method and related device", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the computer field, and in particular to a memory management method and a related device.
Background
Fragmentation of physical memory, i.e. non-contiguous memory pages, has long been one of the major problems faced by operating systems, while most of the memory used by a typical application at run time needs to be contiguous memory. To solve the problem of physical memory fragmentation, the prior art usually uses memory management algorithms, for example the defragmentation algorithm of the Buddy system in Linux, to consolidate the fragmentation in memory into contiguous memory so as to meet the memory needs of applications.
Existing memory management algorithms fall mainly into two categories: synchronous memory defragmentation algorithms and asynchronous memory defragmentation algorithms. A synchronous memory defragmentation algorithm triggers defragmentation when, during memory allocation for an application, the system's available contiguous memory cannot meet the application's needs. An asynchronous memory defragmentation algorithm triggers defragmentation when the system's available contiguous memory falls below a set threshold.
It can be seen that existing memory management algorithms trigger memory defragmentation passively, based on specific events. For example, a synchronous memory defragmentation algorithm triggers defragmentation when the system cannot allocate contiguous memory for the current application, so the allocation can complete only after the system has released memory and consolidated contiguous memory; this greatly increases the waiting time of memory allocation and affects the running efficiency of the current application. An asynchronous memory defragmentation algorithm triggers defragmentation when the system's available contiguous memory falls below a preset threshold and stops once the threshold is exceeded again; when an application needs a large amount of contiguous memory, asynchronous defragmentation cannot meet the application's memory needs in time, so synchronous defragmentation is entered, which likewise leads to long memory allocation times and affects the running efficiency of the application.
Summary
This application provides a memory management method and a related device: based on prediction of application scenarios and contiguous-memory requirements, memory defragmentation is performed proactively to meet the contiguous-memory needs of different application scenarios, reduce memory allocation waiting time, and improve application running efficiency.
In view of this, a first aspect of this application provides a memory management method, which may include:
a terminal device obtains the switching probability of switching from a currently running first application scenario to each of one or more second application scenarios, where "multiple" means two or more; the terminal device then determines a target contiguous memory according to the switching probabilities that meet a preset condition and the contiguous memory required by each second application scenario, among the one or more second application scenarios, whose switching probability meets the preset condition; if the contiguous memory available on the terminal device is smaller than the target contiguous memory, then before the terminal device switches from the first application scenario to any of the one or more second application scenarios, the terminal device performs memory defragmentation according to the target contiguous memory, so that the contiguous memory available on the terminal device is greater than the target contiguous memory.
It should be noted that in the implementations of this application, when the contiguous memory available on the terminal device is equal to the target contiguous memory, memory defragmentation may or may not be performed; this can be adjusted according to actual design requirements and is not limited here.
In this embodiment of this application, the application scenario to which the terminal device is about to switch is predicted: the switching probability from the first application scenario to each second application scenario is obtained, and the target contiguous memory is determined according to the switching probabilities that meet the preset condition and the contiguous memory required by the corresponding second application scenarios. The terminal device then proactively performs memory defragmentation so that the available contiguous memory on the terminal device is greater than the target contiguous memory, ensuring that sufficient contiguous memory can be allocated when the terminal device switches application scenarios, without having to wait until the switch happens before defragmenting, thereby improving the efficiency of switching application scenarios.
With reference to the first aspect of this application, in a first implementation of the first aspect, the terminal device performing memory defragmentation may include:
the terminal device obtains the system load and determines a memory defragmentation algorithm according to the range in which the system load falls; it then defragments the terminal device's memory according to that algorithm, to increase the contiguous memory available on the terminal device.
In this embodiment, the defragmentation algorithm can be determined by the range in which the terminal device's system load falls, and can also be dynamically adjusted according to the system load, so that the terminal device's memory resources are used reasonably, the impact of defragmentation on running application scenarios is reduced, and the efficiency of switching application scenarios is improved.
With reference to the first implementation of the first aspect, in a second implementation of the first aspect, the terminal device determining the memory defragmentation algorithm according to the range of the system load includes:
if the system load is in a first preset range, the terminal device determines that the memory defragmentation algorithm is a deep memory defragmentation algorithm; if the system load is in a second preset range, the terminal device determines that the memory defragmentation algorithm is a moderate memory defragmentation algorithm; or, if the system load is in a third preset range, the terminal device determines that the memory defragmentation algorithm is a light memory defragmentation algorithm.
In this embodiment, a suitable memory defragmentation algorithm is determined according to the terminal device's system load, and the algorithm is adjusted dynamically among the deep, moderate, and light memory defragmentation algorithms, so that while defragmenting, the impact on application scenarios running on the terminal device is reduced, available contiguous memory on the terminal device is cleared out, and the efficiency of switching application scenarios is improved. For example, when the terminal device's load is high, light memory defragmentation can be used to avoid affecting running application scenarios; when the load is low, deep memory defragmentation can be used to make reasonable use of the terminal device's resources without affecting other processes or application scenarios on the terminal device.
With reference to the first aspect, the first implementation of the first aspect, or the second implementation of the first aspect, in a third implementation of the first aspect, obtaining the switching probability of the terminal device switching from the first application scenario to the one or more second application scenarios may include:
obtaining the historical number of switches of the terminal device from the first application scenario to each of the one or more second application scenarios; the terminal device determines, according to the historical switch counts, the switching probability from the first application scenario to each of the one or more second application scenarios.
Specifically, in this implementation the terminal device first obtains the historical number of switches from the first application scenario to the one or more second application scenarios, and then calculates the switching probability to each second application scenario from these historical counts. The switching probability can be used to predict the contiguous memory needed for switching application scenarios, and defragmentation is performed proactively according to that contiguous memory, so that the available contiguous memory on the terminal device satisfies the contiguous memory needed for the switch and the efficiency of switching application scenarios is improved.
With reference to the first aspect or any one of the first to third implementations of the first aspect, in a fourth implementation of the first aspect, determining the target contiguous memory according to the switching probabilities that meet the preset condition and the contiguous memory required by each second application scenario, among the one or more second application scenarios, whose switching probability meets the preset condition may include:
the terminal device determines, from the one or more second application scenarios, the second application scenarios whose switching probability is greater than a threshold; the terminal device determines the target contiguous memory according to the contiguous memory required by those second application scenarios whose switching probability is greater than the threshold.
In this implementation, second application scenarios whose switching probability is not greater than the threshold are filtered out, and the target contiguous memory is determined from the contiguous memory required by the second application scenarios whose switching probability is greater than the threshold; the terminal device then consolidates available contiguous memory greater than the target, so that the contiguous memory available on the terminal device better guarantees the contiguous memory required by the second application scenario about to be switched to.
With reference to the fourth implementation, in a fifth implementation of this application, if multiple second application scenarios whose switching probability is greater than the threshold are determined from the one or more second scenarios, the terminal device determining the target contiguous memory according to the contiguous memory required by those second application scenarios may include:
the terminal device performs a weighted operation on the switching probability and the required contiguous memory of each of the multiple second application scenarios whose switching probability is greater than the threshold, to obtain the target contiguous memory. The weights in the weighted operation may correspond to the switching probabilities of those second application scenarios; for example, the higher the switching probability, the larger the weight.
In this implementation, weighting the switching probability and the required contiguous memory of each second application scenario whose switching probability is greater than the threshold makes the resulting target contiguous memory closer to the contiguous memory needed by the second application scenario the terminal device is about to switch to, thereby guaranteeing the contiguous memory needed for the terminal device to switch to the second application scenario.
With reference to the fourth implementation, in a sixth implementation of this application, the terminal device determining the target contiguous memory according to the contiguous memory required by the one or more second application scenarios whose switching probability is greater than the threshold includes:
the terminal device determines, among the one or more second application scenarios whose switching probability is greater than the threshold, the target application scenario requiring the largest contiguous memory, and uses the contiguous memory required by that target application scenario as the target contiguous memory.
In this implementation, the largest contiguous memory required among the one or more second application scenarios whose switching probability is greater than the threshold can be used as the target contiguous memory, so as to satisfy the contiguous memory required by each of those second application scenarios and improve the efficiency with which the terminal device switches application scenarios.
With reference to the first aspect or any one of the first to sixth implementations of the first aspect, in a seventh implementation of the first aspect, the method may further include:
when the terminal device switches from the first application scenario to one of the one or more second application scenarios, and the available contiguous memory on the terminal device does not satisfy the contiguous memory required by that second application scenario, the terminal device defragments the terminal device's memory fragments using a light memory defragmentation algorithm.
In this implementation, when the terminal device encounters unexpected contiguous-memory consumption, or when the available contiguous memory on the terminal device falls below the target contiguous memory, the terminal device can also perform light memory defragmentation when switching to the second application scenario, quickly clearing out available contiguous memory to ensure that the terminal device can switch application scenarios normally.
A second aspect of the embodiments of this application provides a terminal device having the function of implementing the memory management method of the first aspect. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the function.
A third aspect of the embodiments of this application provides a terminal device, which may include:
a processor, a memory, a bus, and an input/output interface, where the processor, the memory, and the input/output interface are connected by the bus; the memory is configured to store program code; and when calling the program code in the memory, the processor performs the steps performed by the terminal device provided in the first aspect or any implementation of the first aspect of this application.
A fourth aspect of the embodiments of this application provides a storage medium. It should be noted that the technical solution of this application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and stores the computer software instructions used by the above devices, including a program designed for the terminal device to perform any one of the first to second aspects above.
The storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
A fifth aspect of the embodiments of this application provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the method described in any optional implementation of the first or second aspect of this application.
In the embodiments of this application, if the terminal device is currently running a first application scenario, the terminal device can obtain the switching probability of switching from the first application scenario to each of one or more second application scenarios, and determine the target contiguous memory according to the switching probability of each second application scenario, among the one or more second application scenarios, whose switching probability meets the preset condition, together with the contiguous memory required for each such second application scenario to start and run. Then, before the terminal device switches from the first application scenario to any second application scenario, the terminal device proactively performs memory defragmentation so that the available contiguous memory on the terminal device is larger than the target contiguous memory, guaranteeing the contiguous memory needed by the second application scenario about to be switched to, and improving the efficiency with which the terminal device switches to and runs the second application scenario.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a specific application scenario of an embodiment of this application;
FIG. 2 is a schematic diagram of memory pages in an embodiment of this application;
FIG. 3 is a framework diagram of the memory management method in an embodiment of this application;
FIG. 4 is a schematic flowchart of the memory management method in an embodiment of this application;
FIG. 5 is another schematic flowchart of the memory management method in an embodiment of this application;
FIG. 6 is a schematic diagram of the buddy algorithm in an embodiment of this application;
FIG. 7 is another schematic flowchart of the memory management method in an embodiment of this application;
FIG. 8 is a schematic diagram of light memory defragmentation in an embodiment of this application;
FIG. 9 is a schematic diagram of moderate memory defragmentation in an embodiment of this application;
FIG. 10 is a schematic diagram of deep memory defragmentation in an embodiment of this application;
FIG. 11 is a schematic diagram of a specific scenario of the memory management method in an embodiment of this application;
FIG. 12 is a schematic diagram of an embodiment of a terminal device in an embodiment of this application;
FIG. 13 is a schematic diagram of another embodiment of a terminal device in an embodiment of this application.
Detailed Description
This application provides a memory management method and a related device: based on prediction of application scenarios and contiguous-memory requirements, memory defragmentation is performed proactively to meet the contiguous-memory needs of different application scenarios, reduce memory allocation waiting time, and improve application running efficiency.
The memory management method provided in the embodiments of this application can be applied to a terminal device, which may be a mobile phone, a tablet computer, an in-vehicle mobile apparatus, a PDA (personal digital assistant), a camera, a wearable device, or the like. Of course, in the following embodiments, no limitation is placed on the specific form of the terminal device. The system carried by the terminal device may include
(Figure PCTCN2019082098-appb-000001)
or other operating systems, which is not limited in the embodiments of this application.
Taking a terminal device 100 running the
(Figure PCTCN2019082098-appb-000002)
operating system as an example, as shown in FIG. 1, the terminal device 100 can be logically divided into a hardware layer 21, an operating system 161, and an application layer 31. The hardware layer 21 includes hardware resources such as an application processor 101, a microcontroller unit 103, a modem 107, a Wi-Fi module 111, a sensor 114, and a positioning module 150. The application layer 31 includes one or more application programs, such as application program 163, which may be any type of application such as a social application, an e-commerce application, or a browser. The operating system 161, as software middleware between the hardware layer 21 and the application layer 31, is a computer program that manages and controls hardware and software resources.
In one embodiment, the operating system 161 includes a kernel 23, a hardware abstraction layer (HAL) 25, libraries and runtime 27, and a framework 29. The kernel 23 provides underlying system components and services, for example power management, memory management, thread management, and hardware drivers; the hardware drivers include a Wi-Fi driver, sensor drivers, a positioning module driver, and the like. The hardware abstraction layer 25 is a wrapper around the kernel drivers that provides an interface to the framework 29 and shields low-level implementation details. The hardware abstraction layer 25 runs in user space, while the kernel drivers run in kernel space.
The libraries and runtime 27, also called the runtime library, provides the library files and execution environment needed by executable programs at run time. The libraries and runtime 27 include the Android Runtime (ART) 271, libraries 273, and the like. ART 271 is a virtual machine or virtual machine instance capable of converting application bytecode into machine code. The libraries 273 are program libraries that support executable programs at run time, including a browser engine (such as webkit), a script execution engine (such as a JavaScript engine), a graphics processing engine, and so on.
The framework 29 provides various basic common components and services, such as window management and location management, for the applications in the application layer 31. The framework 29 may include a telephony manager 291, a resource manager 293, a location manager 295, and the like.
The functions of the components of the operating system 161 described above can all be implemented by the application processor 101 executing programs stored in the memory 105.
A person skilled in the art can understand that the terminal 100 may include fewer or more components than shown in FIG. 1; the terminal device shown in FIG. 1 includes only the components more relevant to the implementations disclosed in the embodiments of this application.
As can be seen from FIG. 1 above, multiple application programs ("applications" for short) can be installed on the terminal device, and the device can switch between multiple applications, or between multiple scenarios of one application, such as different functions or interfaces of the application. When the terminal device switches application scenarios, whether between multiple applications or among multiple scenarios within one application, the terminal device's memory involves the operation of multiple modules, including memory allocation and memory reads and writes. Each application scenario requires contiguous memory to start and run.
Taking one specific application-scenario switch as an example, applications such as a browser, shopping software, and game software are installed on the terminal device. The terminal device can switch among multiple application scenarios, including among the applications installed on it, or among scenarios within an application, such as functions or user interfaces: for example, from the browser to the shopping software, from the shopping software to the camera software, or from the photographing scenario within the camera software to the photo-preview scenario. When the terminal device switches application scenarios, memory changes are also involved inside the terminal device: starting and running each application scenario requires memory. Memory can be divided in multiple ways. Specifically, taking a Linux system as an example, physical memory is divided into fixed pages; it can be divided into multiple memory pages, and the size of one memory page may be 4 KB. Application scenarios generally need contiguous memory, i.e. contiguous memory pages, at run time. Memory pages can be classified as movable pages, non-movable pages, and reclaimable pages. A movable page can be moved at will: the data stored on it can be moved to another memory page. Pages occupied by user-space applications generally belong to movable pages; applications are associated with memory pages through page mapping, so it is only necessary to update the page-table entry and copy the data stored on the original memory page to the target memory page. One memory page may also be shared by multiple processes and thus correspond to multiple page-table entries. After long-term memory allocation and release, physical memory forms some non-movable pages, which are fixed in position in memory and cannot be moved elsewhere. Most pages allocated by the core kernel are non-movable pages, and the longer the system runs, the more non-movable pages there will be. Reclaimable pages cannot be moved directly, but they can be reclaimed: for example, if an application has rebuilt the data on another memory page, the data on the original memory page can be reclaimed. Generally, reclaimable pages can be reclaimed by a memory-reclaim process preset in the system. For example, the memory occupied by the data of mapped files may belong to reclaimable pages, and the kswapd process in a Linux system can periodically reclaim reclaimable pages according to preset rules.
When the terminal device switches application scenarios, contiguous memory will be used, and as the system is used, physical memory becomes occupied over the course of operation. For example, as shown in FIG. 2, a span of memory may contain multiple kinds of memory pages, including free pages, reclaimable pages, non-movable pages, and movable pages. Therefore, to ensure the normal operation of application scenarios on the terminal device, the memory needs to be defragmented to obtain contiguous free memory. Memory defragmentation is a process of reducing the number of memory fragments; it mainly includes moving movable memory pages, or reclaiming or removing reclaimable pages, to obtain free memory with contiguous physical addresses.
Therefore, to ensure that the terminal device has enough available contiguous memory, taking the terminal device described in the foregoing figure as an example, the embodiments of this application improve the operating-system part. Specifically, this may involve the memory-related parts of the operating system and the parts required for application running, as well as the application-layer part inside the terminal device, specifically the application switching process. In addition, the functional modules described in FIG. 1 are only some of the modules; in practical applications the terminal device may include multiple modules related to memory, to application running, and to application switching, which is not limited here. The specific improvements provided by the memory management method in the embodiments of this application may include: before an application-scenario switch, predicting the application scenario about to be switched to and the contiguous memory it requires, and proactively performing memory defragmentation to consolidate sufficient contiguous memory, thereby ensuring that when the terminal device switches application scenarios, it can allocate enough contiguous memory for the application scenario. Specifically, the framework of the memory management method in the embodiments of this application is shown in FIG. 3.
Before an application starts or an application-scenario switch occurs, the terminal device can predict, by calculation or learning, the target contiguous memory required by the application scenario about to be switched to. The terminal device then performs memory defragmentation according to the target contiguous memory, consolidating usable contiguous memory so that the available contiguous memory on the terminal device exceeds the target contiguous memory. The "application scenario" in the embodiments of this application may be an application on the terminal device, or a scenario within an application on the terminal device, such as a function or user interface of the application; that is, an application-scenario switch in this application may be a switch between applications on the terminal device or a scenario switch within one application on the terminal device, which is not specifically limited here. Before the terminal device switches application scenarios, contiguous memory is allocated for the scenario about to be switched to, so that the terminal device satisfies the contiguous memory required by that application scenario and the scenario can run normally. To guarantee the contiguous-memory requirement of the application scenario about to be switched to, the memory management method provided by this application lets the terminal device predict the target contiguous memory required by that scenario before the switch and then perform memory defragmentation. Therefore, in the embodiments of this application, the contiguous memory needed by the scenario about to be switched to can be consolidated before the switch, guaranteeing the contiguous memory needed when that scenario runs and improving the efficiency of application-scenario switching.
Further, the flow of the memory management method in an embodiment of this application is shown in FIG. 4, a schematic flowchart of the memory management method of this application, including:
401. Obtain the switching probability of switching from a first application scenario to each of one or more second application scenarios.
The terminal device may include one or more applications, where "multiple" means two or more. Each application may include multiple application scenarios. While running, the terminal device can switch among these application scenarios. If the terminal device is currently running the first application scenario, the terminal device can obtain the switching probability of switching from the first application scenario to each of the one or more second application scenarios; the probability may be obtained from the number of application-scenario switches on the terminal device, by deep learning, or the like. For example, if the terminal device is currently running a browser, i.e. the first application scenario, the terminal device can obtain the switching probability from the browser to each of one or more second application scenarios such as the camera, a game, or shopping software: e.g., a 15% probability of switching from the browser to the camera, 23% to the shopping software, 3% to the game, and so on. As another example, when switching among scenarios within an application: while the chat dialog scenario in WeChat is running on the terminal device, the probability of switching to a WeChat red packet may be 30%, and to the Moments scenario 50%, and so on.
In this embodiment of this application, the first application scenario and the second application scenario may be different applications, or different application scenarios in different or the same applications; this can be adjusted according to actual design requirements and is not specifically limited here.
402. Obtain the contiguous memory required by each of the one or more second application scenarios.
Besides obtaining the switching probability from the first application scenario to each of the one or more second application scenarios, the terminal device also needs to obtain the contiguous memory required by each of the one or more second application scenarios. For example, running a game may need 60 KB of contiguous memory, running the camera 500 KB, and so on. Alternatively, the contiguous memory required by each second application scenario whose switching probability exceeds a preset value may be obtained.
It should be understood that if the contiguous memory required by each of the one or more second application scenarios is obtained, step 401 may be performed first or step 402 may be performed first in this embodiment of this application, which is not specifically limited here.
403、根据切换概率以及每个第二应用场景所需的连续内存确定目标连续内存。
在确定切换到第二应用场景的切换概率以后,确定该切换概率中确定满足预设条件的切换概率,以及一个或多个第二应用场景中切换概率满足预设条件的每个第二应用场景所需的连续内存。终端设备可以根据该切换概率以及切换概率满足预设条件的每个第二应用场景所需的连续内存计算得到目标连续内存。终端设备可在终端设备切换到任意一个第二应用场景之前,进行内存碎片整理,使终端设备上可用的连续内存大于该目标连续内存,进而使终端设备在切换应用场景时,可以为应用场景分配足够的连续内存,保障终端设备从第一应用场景切换到其中一个第二应用场景所需的连续内存。
具体计算目标连续内存的方式可以是,首先确定切换概率大于阈值的第二应用场景,若存在至少两个切换概率大于阈值的第二应用场景,对该至少两个切换概率大于阈值的第二应用场景中的每个第二应用场景的切换概率以及所需连续内存进行加权运算,得到该目标连续内存;也可以是,终端设备从该切换概率大于阈值的第二应用场景中确定需求的最大连续内存,然后将该最大连续内存作为目标连续内存等,具体可根据实际设计需求调整,此处不作限定。
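步骤403所述的两种计算方式(加权运算或取最大需求)可以用如下Python草图示意(阈值与数据为示例假设):

```python
def target_contiguous_memory(candidates, threshold=0.1, mode="weighted"):
    """candidates: [(切换概率, 所需连续内存KB)]。
    先滤除切换概率不大于阈值的场景,再按加权平均("weighted",
    权重与切换概率成正比)或最大需求("max")确定目标连续内存。"""
    kept = [(p, m) for p, m in candidates if p > threshold]
    if not kept:
        return 0
    if mode == "max":
        return max(m for _, m in kept)
    total_p = sum(p for p, _ in kept)
    return sum(p / total_p * m for p, m in kept)
```

例如对候选 [(0.5, 500), (0.3, 60), (0.05, 1000)],概率0.05的场景被滤除;"max"方式得到500,加权方式得到 0.5/0.8×500 + 0.3/0.8×60 = 335。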
404、判断终端设备上可用的连续内存是否大于目标连续内存。
在终端设备确定目标连续内存后,终端设备判断可用的连续内存是否大于目标连续内存,即判断终端设备上可用的连续内存的大小是否大于目标连续内存的大小。若可用的连续内存大于目标连续内存,则执行步骤405,若可用的连续内存不大于目标连续内存,则执行步骤406。其中,该可用的连续内存为终端设备上可以分配给第二应用场景的连续内存。
405、执行其他步骤。
若终端设备上可用的连续内存大于目标连续内存,则该终端设备上可用的连续内存可以保障即将从第一应用场景切换到其中一个第二应用场景所需的连续内存。此时终端设备可以进行内存碎片整理,也可以不进行内存碎片整理,具体可根据实际设计需求调整,此处不作限定。
406、进行内存碎片整理。
若终端设备上可用的连续内存小于或等于目标连续内存,为保障从第一应用场景切换到其中一个第二应用场景所需的连续内存,在终端设备从第一应用场景切换到第二应用场景之前,终端设备需要进行内存碎片整理,使终端设备上可用的连续内存不小于该目标连续内存,以保障终端设备在切换到第二应用场景时,有足够的连续内存可以分配。内存碎片整理的具体步骤可以是,对终端设备上的内存页面进行整理,对可移动页面进行移动,或可回收页面进行回收等,以整理出空闲的连续内存,该空闲的连续内存可以在终端设备进行应用场景切换的时候为切换的应用场景进行分配。
需要说明的是,在本申请实施例中,若终端设备上可用的连续内存等于目标连续内存,除了可以进行内存碎片整理,也可以不进行内存碎片整理,具体可根据实际设计需求调整,此处不作限定。
在本申请实施例中,首先确定终端设备从第一应用场景切换到一个或多个第二应用场景中每个第二应用场景的切换概率,然后根据该切换概率以及需求内存计算得到目标连续内存。之后根据该目标连续内存进行内存碎片整理,使终端设备上可用的连续内存不小于目标连续内存,以保障终端设备从第一应用场景切换到该一个或多个第二应用场景中的其中一个第二应用场景时,有足够的可用的连续内存。提高从第一应用场景切换到该一个或多个第二应用场景中的其中一个第二应用场景的效率。
前述对本申请实施例中内存管理方法的流程进行了说明,下面对本申请实施例中内存管理方法的流程进行更进一步地阐述。首先对确定目标连续内存的具体步骤进行详细说明,请参阅图5,本申请实施例中内存管理的方法的另一种实施例示意图,包括:
501、第一应用场景启动。
该第一应用场景为终端设备当前正在运行的应用场景,当第一应用场景启动并正常运行后,终端设备接下来可以对即将切换到的应用场景进行预测,并提前整理出足够的连续内存,以保障切换的应用场景可以使用足够的连续内存,提高终端切换应用场景的效率。
502、切换应用场景数据采集。
终端设备可以采集应用场景切换的数据,例如,当前时刻之前的24小时内,终端设备从应用场景A切换到应用场景B的次数,或应用场景A切换到应用场景C的次数等。
具体的实施方式可以是,在终端设备内的场景启动函数中插入切换计数变量,对每次切换应用场景进行计数,例如,从微信的聊天场景切换到朋友圈场景等。
503、确定应用场景关联关系。
在对应用场景的切换次数进行采集后,可以通过应用场景之间的切换次数确定各个应用场景的关联关系,可以生成应用场景关联矩阵,可根据该应用场景关联矩阵确定即将切换的应用场景,即切换到第二应用场景的切换概率。切换概率的计算方式例如,从应用场景A切换到应用场景B为50次,从应用场景A切换到应用场景C为30次,从应用场景A切换到应用场景D为20次,那么,若当前运行应用场景A,即第一应用场景,切换到应用场景B的切换概率为50%,切换到应用场景C的切换概率为30%,切换到应用场景D的切换概率为20%。该应用场景B、应用场景C以及应用场景D即前述图4中的一个或多个第二应用场景。
例如,在过去的24小时内,从应用场景A切换到应用场景B的次数为500次,从应用场景A切换到应用场景C的次数为100次。在实际应用中,若终端设备当前正在运行应用场景A,那么,从应用场景A切换到应用场景B的概率大于从应用场景A切换到应用场景C的概率。应用场景关联矩阵可以如下表1所示:
              应用场景A  应用场景B  应用场景C  应用场景D  应用场景E  应用场景F
应用场景A        —        100        10        20         0         1
应用场景B        5         —         20        30        50         0
应用场景C       80        20         —         0         2        13
应用场景D       10         6          2         —        22         6
应用场景E        0        20          0         0         —         0
应用场景F        0         0         30         0         0         —
表1
其中,该应用场景关联矩阵用于表示终端设备从某个应用场景切换到另一个应用场景的次数。例如,表1第一行中,从应用场景A切换到应用场景B的次数为100次,切换到应用场景C的次数为10次,切换到应用场景D的次数为20次,切换到应用场景E的次数为0次,切换到应用场景F的次数为1次,可以以此推算出从应用场景A切换到其他每个应用场景的概率。终端设备从应用场景A切换到应用场景B的概率大于切换到应用场景C、应用场景D、应用场景E以及应用场景F的概率。且在进行切换应用场景数据采集时,每切换一次应用场景,都可以对该应用场景关联矩阵进行更新,以便实时对终端设备上的应用切换进行学习。其中,对历史记录的更新可以通过对原始记录与更新数据求平均值进行更新,也可以通过对原始记录与更新数据进行加权运算进行更新,具体此处不作限定。
此外,在确定应用场景的关联关系时,每进行一次切换,则可以对该关联关系进行更新,以使终端设备可以根据更多历史切换数据预测即将切换的应用场景。历史切换数据越多,终端设备进行切换场景预测的准确性越高,因此终端设备可以通过更新应用场景关联关系提高预测的准确率,从而保障终端设备上清理出来的可用连续内存满足进行应用场景切换时所需的连续内存。
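表1的关联矩阵"每切换一次即递增计数、再换算为切换概率"的维护方式,可以示意为如下Python草图(类名与数据结构均为本文假设):

```python
class SceneMatrix:
    """应用场景关联矩阵:记录 (源场景, 目标场景) -> 历史切换次数。"""

    def __init__(self):
        self.counts = {}

    def record_switch(self, src, dst):
        # 每发生一次切换,对应计数加一,即实时更新关联矩阵
        self.counts[(src, dst)] = self.counts.get((src, dst), 0) + 1

    def probability(self, src, dst):
        # 某一行的计数归一化即为从 src 出发的各切换概率
        row = {d: n for (s, d), n in self.counts.items() if s == src}
        total = sum(row.values())
        return row.get(dst, 0) / total if total else 0.0

m = SceneMatrix()
for dst, n in (("应用场景B", 100), ("应用场景C", 10),
               ("应用场景D", 20), ("应用场景F", 1)):
    for _ in range(n):
        m.record_switch("应用场景A", dst)
```

按表1第一行的计数,从应用场景A切换到应用场景B的概率为 100/131,明显高于其他场景。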
504、应用场景连续内存采集。
除了需要采集应用场景的切换次数,确定应用场景的切换概率外,还需要对每个应用场景运行所需的连续内存进行采集,即每个应用场景启动过程中的连续内存需求和应用场景运行时的连续内存需求,根据每个应用场景的连续内存需求可以识别出接下来的应用场景所需的连续内存需求。
具体的实施方式可以是,在终端设备内的内存分配函数中插入切换计数变量,对每次内存分配的连续内存进行计数,每次分别在应用场景准备进入、进入完成和退出时获取一次计数。进入完成和准备进入时采集的连续内存差值即为该应用场景启动时的连续内存需求,退出和进入完成的差值即为该应用场景整体连续内存需求。
此外,除了可以采集每个应用场景需求的连续内存,还可以采集从当前应用场景进行切换时切换概率大于阈值的应用场景需求的连续内存,例如,从当前应用场景进行切换时,切换概率大于10%的应用场景有10个,则可以仅采集此10个应用场景需求的连续内存,具体的采集情况可根据设计需求调整,具体此处不作限定。
应理解,若在进行应用场景连续内存采集时无需使用步骤502采集到的应用场景切换数据,则本申请对步骤502与步骤504的执行顺序不作限定,可以先执行步骤502,也可以先执行步骤504,具体可根据实际设计需求调整,具体此处不作限定。
在本申请实施例中,也可以对每个应用场景所需的连续内存进行记录,每次分配连续内存后,对每个应用场景所需的连续内存进行更新。可以是当前切换分配的连续内存与历史切换所分配的连续内存进行加权运算,也可以是用当前切换分配的连续内存替代历史切换所分配的连续内存记录,具体可以根据实际场景调整,此处不作限定。
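步骤504中"三个时刻取样、按差值得到需求,并以加权运算更新历史记录"的做法可示意如下(权重0.3为示例假设):

```python
def scene_demand(sample_prepare, sample_started, sample_exit):
    """入参为准备进入、进入完成、退出三个时刻的连续内存分配计数(KB)。
    启动需求 = 进入完成 - 准备进入;整体需求 = 退出 - 进入完成。"""
    startup = sample_started - sample_prepare
    overall = sample_exit - sample_started
    return startup, overall

def update_demand(history_kb, new_kb, weight=0.3):
    """对历史需求记录与本次采集值做加权运算更新(权重为假设值)。"""
    return (1 - weight) * history_kb + weight * new_kb

startup, overall = scene_demand(0, 100, 600)   # 启动需100KB,整体需500KB
```

也可按正文所述,直接用本次分配的连续内存替代历史记录,二者择一即可。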
505、确定应用场景连续内存需求。
在对应用场景的连续内存进行采集后,可以根据采集到的数据确定每个第二应用场景的连续内存需求,例如,可以根据每个第二应用场景启动过程中的连续内存需求和应用场景运行时的连续内存需求,确定从切换该第二应用场景到该应用场景运行时所需的连续内存需求。
具体地,在Linux系统中,通过伙伴(buddy)算法对内存碎片进行管理,系统内核在每个zone中管理可用的内存页面,按2的幂级大小排列成链表队列,存放在free_area数组中。下面以一个具体的实施例进行说明,请参阅图6,本申请实施例中伙伴算法的示意图。其中,系统内存中有16个内存页面,包括内存页面0至内存页面15,即图6中的pages行中的0-15。该16个内存页面按2的幂级大小排列成链表队列。因仅有16个页面,因此仅需4个级别(order)就能确定该16个内存页面的位图,即图6中的order0至order3。高阶的连续内存可以通过低阶连续内存快速整理出来,低阶连续内存可以通过高阶连续内存进行快速分配。因此,在确定应用场景的连续内存需求时,可以通过buddy算法进行连续内存分配。具体的格式如表2所示:
            启动            最大
应用场景A   100,order:2    500,order:2
应用场景B   200,order:4    500,order:4
应用场景C   100,order:8    1000,order:8
应用场景D   0              0
应用场景E   10,order:2     10,order:2
应用场景F   100,order:2    500,order:2
表2
其中,启动应用场景A时,需要100个order2的内存页面,在应用场景A正常运行时,需要500个order2的内存页面;启动应用场景B时,需要200个order4的内存页面,在应用场景B正常运行时,需要500个order4的页面;启动应用场景C时,需要100个order8的内存页面,在应用场景C正常运行时,需要1000个order8的内存页面;启动应用场景D时,需要0个内存页面,在应用场景D正常运行时,需要0个内存页面;启动应用场景E时,需要10个order2的内存页面,在应用场景E正常运行时,需要10个order2的内存页面;启动应用场景F时,需要100个order2的内存页面,在应用场景F正常运行时,需要500个order2的内存页面,其他的应用场景以此类推。
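利用"高阶空闲块可拆分为低阶块"这一buddy性质,可以估算当前free_area能否满足表2中某一order的块数需求。以下为示意性Python草图(并非Linux内核free_area的实际实现):

```python
def pages_of(order):
    """order n 对应 2**n 个物理地址连续的页面。"""
    return 1 << order

def can_satisfy(free_area, need_order, need_blocks):
    """free_area: {order: 空闲块数}。每个 order >= need_order 的空闲块
    可拆分出 2**(order - need_order) 个 need_order 级的块,
    折算后判断总块数是否满足需求。"""
    blocks = 0
    for order, n in free_area.items():
        if order >= need_order:
            blocks += n << (order - need_order)
    return blocks >= need_blocks

free_area_example = {0: 10, 2: 3, 4: 1}   # 假设的空闲块分布
```

例如需要7个order2的块时,3个order2块加上1个order4块拆出的4个order2块恰好满足;需要8个时则不满足,需要触发内存碎片整理。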
506、预测目标连续内存。
在确定每个第二应用场景的切换概率以及每个第二应用场景所需的连续内存后,可以 对即将切换的第二应用场景进行预测,同时预测所需的目标连续内存。其中可以通过应用场景关联矩阵确定从当前应用切换到其他应用场景的切换概率,可以通过设置一个阈值,将低于阈值的应用场景过滤掉,即可以将发送概率较低的应用场景过滤掉。例如,若应用场景A的切换概率低于10%,则过滤掉该应用场景A。
确定目标连续内存的具体步骤可以是,首先滤除一个或多个第二应用场景中切换概率不大于阈值的第二应用场景。然后对切换概率大于阈值的第二应用场景的切换概率以及所需的连续内存进行加权运算,得到目标连续内存。具体地,该加权运算中的权重可以与每个第二应用场景所对应的切换概率具有对应关系。例如,切换概率越大的应用场景所占的权重可以越大,即得到的目标连续内存更偏向于切换概率较大的第二应用场景所需的连续内存;还可以是将该切换概率大于阈值的第二应用场景中所需的最大的连续内存作为该目标连续内存,或通过其他算法得到目标连续内存,具体可根据实际设计需求进行调整,此处不作限定。
507、启动内存碎片整理。
在确定目标连续内存后,若终端设备上可用的连续内存不大于目标连续内存,即可主动进行内存碎片整理,以使终端设备在切换到第二应用场景之前,可用的连续内存大于目标连续内存。其中,具体的碎片整理方法在如下图7的实施例中详细说明。
在本申请实施例中,通过预测切换到第二应用场景所需的目标连续内存,并提前进行内存碎片整理,以使终端设备上可用的连续内存大于目标连续内存,因此终端设备在从第一应用场景切换到第二应用场景时,可以使用足够的连续内存启动以及运行第二应用场景,从而减少终端设备切换到第二应用场景的等待时间,进而可以提高终端设备切换应用场景的效率。
前述着重对本申请实施例中内存管理方法中确定目标连续内存的具体步骤进行了说明,在本申请提供的内存管理方法中,除了在应用场景切换前预测目标连续内存,并进行内存碎片整理外,为进一步提高内存碎片整理的效率,且不影响终端设备上正在运行的应用或进程等,本申请实施例还对内存整理的具体算法作出了改进,可通过动态调节进行内存碎片整理。下面对本申请实施例中内存管理方法中进行内存碎片整理的步骤进行详细阐述,请参阅图7,本申请实施例中内存管理的方法的另一个实施例示意图,可以包括:
701、启动内存碎片整理。
在终端设备通过预测的方式确定目标连续内存后,在切换至某一第二应用场景之前,终端设备可以启动内存碎片整理,以整理出可用的连续内存。具体在如下步骤702-步骤708中详细描述。
702、计算当前可用连续内存,若满足目标连续内存,则执行步骤703,若不满足目标连续内存,则执行步骤704。
在确定目标连续内存后,终端设备可以计算当前可用连续内存,即终端设备当前可分配给第二应用场景的连续内存。具体地,在Linux系统中,可以从buddy系统中获取当前终端设备上所有的可用连续内存,并判断可用连续内存是否满足目标连续内存。
若终端设备上的可用连续内存不满足目标连续内存,则执行步骤704,即进行快速整理连续内存,以使终端设备上的可用连续内存不小于目标连续内存。若终端设备上的可用连续内存不小于目标连续内存,则终端设备可以进行不可移动页面密集区计算,即执行步骤703。
703、计算不可移动页面密集区。
在终端设备启动内存碎片整理时,或终端设备上的可用连续内存满足目标连续内存时,可以对不可移动页面密集区进行计算。其中,在预置的单位范围内,不可移动页面超过密集阈值,则认为该单位范围为不可移动页面密集区。例如,若在1024个页面中,不可移动页面超过100个,则可以认为该1024个页面属于不可移动页面密集区。当不可移动页面密集区大于预设值时,则可以执行步骤704。而当不可移动页面密集区不大于预设值时,可以停止进行内存碎片整理。
需要说明的是,在本申请实施例中,可以对不可移动页面密集区进行计算,也可以不对不可移动页面密集区进行计算,即步骤703可以为可选步骤。在实际应用中,当终端设备未对目标连续内存进行计算时,也可以直接计算不可移动页面密集区。若不可移动页面密集区大于预设值,也可以进行快速整理连续内存,即通过轻度内存碎片整理算法进行内存碎片整理,轻度内存碎片整理算法在步骤704中进行详细说明。若不可移动页面密集区不大于预设值,可以继续通过轻度内存碎片整理算法快速进行内存碎片整理,也可以不进行内存碎片整理,具体可根据实际设计需求调整,此处不作限定。
具体地,在终端设备进行内存碎片整理时,若遇到不可移动页面直接跳过,则当系统长时间运行后,不可移动页面增加,内存碎片化程度大大提升,导致整理出大块连续内存的成功率降低,使内存整理以及内存分配的速度下降,将降低终端设备的运行效率。因此,在本申请实施例中,对不可移动页面密集区进行计算,并在后续进行内存碎片整理时,对包含不可移动页面的区域进行整理,具体在后续步骤707以及步骤708中详细描述。因此,可以对包含不可移动页面的区域进行整理,避免系统长时间运行后,因不可移动页面增加而降低内存碎片整理的效率以及成功率,可以提高终端设备进行内存碎片整理的效率以及成功率。
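步骤703中"单位范围内不可移动页面超过密集阈值即视为密集区"的判定,可示意为如下Python草图(1024页的单位与100页的阈值取自正文示例,函数名为本文假设):

```python
def classify_region(pages, dense_threshold=100):
    """pages: 一个预置单位范围(如1024页)内的页面类型列表。
    返回 'dense'(不可移动页面密集区)/ 'normal'(不可移动页面普通区)
    / 'movable'(可移动页面区)。"""
    unmovable = sum(1 for p in pages if p == "unmovable")
    if unmovable > dense_threshold:
        return "dense"
    if unmovable > 0:
        return "normal"
    return "movable"

region = ["movable"] * 900 + ["unmovable"] * 124   # 124 > 100,属于密集区
```

后续的轻度、中度、深度整理算法即按此分类选择各自处理的区域。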
704、快速整理连续内存。
在终端设备上的可用连续内存不满足目标连续内存,或终端设备上不可移动页面密集区大于预设值时,终端设备快速整理连续内存。包括通过轻度内存碎片整理算法进行内存碎片整理,即可以对可移动页面区进行内存碎片整理。具体地,通过轻度内存碎片整理算法整理前的内存页面以及整理后的内存页面如图8所示,其中,可移动页面区为预置单位范围内不包括不可移动页面的内存区域。例如,若在1024个页面中,不包括不可移动页面,则可以认为该1024个页面属于可移动页面区。通过轻度内存碎片整理算法进行内存碎片整理,即对可移动页面区进行整理,将可移动页面区中的可移动页面都移动至一段连续内存上,以使空闲页面形成连续内存。例如,可移动页面区地址为0001-0100的内存页面中包括了不连续的20个可移动页面,那么,可以将该20个可移动页面统一移动至0001-0020上,因此地址为0020之后的内存页面都为空闲页面,以此整理出空闲的连续内存。因此,通过轻度内存碎片整理算法可以在终端设备上快速整理出空闲的连续内存,以保障终端设备进行应用场景切换时所需的连续内存。
在实际应用中,可以通过轻度内存碎片整理算法快速整理出可用的连续内存,以保障在终端设备切换应用时有更多的可用连续内存可以分配。例如,若终端设备当前运行应用场景A,那么,若终端设备上当前的可用连续内存不满足应用场景A,或终端设备上的不可移动页面密集区大于预设值,则终端设备可以进行快速整理连续内存,对可移动页面区进行快速整理,快速整理出可用的连续内存。以此避免终端设备突然切换应用场景而连续内存不足,进而提高切换应用场景的效率以及可靠性。若不可移动页面密集区大于预设值,说明终端设备上的不可移动页面增多,可通过快速整理连续内存,使终端设备上有更多的连续内存可以为应用场景分配。
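轻度整理"把分散的可移动页面集中到区域前部、让空闲页面连成连续内存"的过程,可示意如下(对应正文0001-0020的例子;仅为示意性草图,不涉及真实的页面迁移):

```python
def light_compact(region):
    """region: 可移动页面区的页面列表,元素为 'movable' 或 'free'
    (可移动页面区不包含不可移动页面)。
    返回整理后的列表:可移动页面集中在前,空闲页面连续排在后。"""
    movable = sum(1 for p in region if p == "movable")
    return ["movable"] * movable + ["free"] * (len(region) - movable)

before = ["free", "movable", "free", "movable", "movable", "free"]
after = light_compact(before)   # 3个可移动页在前,3个空闲页连成一段
```

整理后尾部的空闲页面即为可直接分配的连续内存。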
705、获取***负载。
在进行快速整理连续内存后,可以增加终端设备上可用的连续内存,可以防止终端设备突然切换应用场景时可用连续内存不够的情况。若终端设备上的可用连续内存仍然不满足目标连续内存,或为更进一步地提高终端设备上的可用连续内存,可以进一步对内存碎片进行整理。具体可以通过获取终端设备的系统负载,根据终端设备的系统负载所处的范围对内存碎片整理算法进行动态调整,以合理利用终端设备的资源,且降低对终端设备上正在运行的应用场景的影响。该系统负载可以用于表示终端设备中的系统繁忙程度,可以是终端设备上单位时间内正在运行或等待运行的进程所占的系数。例如,系统负载可以是单位时间内,终端设备的运行队列中的进程数量的平均值。具体地,在Linux系统中,可以通过使用预置的查询指令,如uptime、top指令等,以查询终端设备的系统负载。
终端设备的系统负载通常可以通过终端设备的中央处理器(central processing unit,cpu)的占用率或输入输出(input/output,io)的吞吐率表示。确定终端设备的系统负载的方式具体可以是,读取系统中cpu或io的节点,以此获取终端设备的系统负载。然后终端设备可以根据系统负载进行动态调节,即动态调节内存碎片的整理算法,分级实现内存碎片整理,提高终端设备进行内存碎片整理的效率,且降低进行内存碎片整理对终端设备上运行的应用场景的影响。具体地,若系统负载处于第一预设范围,则终端设备确定内存碎片整理算法为深度内存碎片整理算法,即执行步骤708;若系统负载处于第二预设范围,则终端设备确定内存碎片整理算法为中度内存碎片整理算法,即执行步骤707;或若系统负载处于第三预设范围,则终端设备确定内存碎片整理算法为轻度内存碎片整理算法,即执行步骤706。
具体的分级内存整理算法如表3中所示:
系统负载      内存碎片整理算法        整理区域
<20%         深度内存碎片整理算法    不可移动页面密集区、不可移动页面普通区、可移动页面区
20%~40%      中度内存碎片整理算法    不可移动页面普通区、可移动页面区
40%~60%      轻度内存碎片整理算法    可移动页面区
>60%         不进行内存碎片整理      —
表3
根据表3可知,具体地,当系统负载<20%时,系统负载不高,此时进行深度内存碎片整理算法不影响终端设备上当前运行的应用场景或其他应用场景的运行,深度整理算法包括对不可移动页面密集区、不可移动页面普通区以及可移动页面区进行内存碎片整理;当系统负载处于20%~40%时,终端设备进行中度内存碎片整理算法,相比深度内存碎片整理算法减少了不可移动页面密集区的整理,以降低进行内存碎片整理时系统的负载,可以避免影响终端设备上当前运行的应用场景或其他应用场景运行的效率;当系统负载处于40%~60%时,此时系统较为繁忙,可以进行轻度内存碎片整理,仅整理可移动页面区,以降低内存碎片整理对终端设备上当前运行的应用场景或其他应用场景的影响;当系统负载大于60%时,此时终端设备的系统繁忙,可以不进行内存碎片整理,以避免影响终端设备上正在运行的应用场景。
需要说明的是,除了第一预设范围可以是<20%,第二预设范围为20%~40%,第三预设范围为40%~60%,第一预设范围、第二预设范围与第三预设范围还可以是其他值,具体可根据实际设计需求调整,此处不作限定。
在本申请实施例中,可以根据系统负载确定不同的内存碎片整理算法,减少对终端设备上当前运行的应用场景或其他应用场景的影响,使终端设备上的应用场景正常运行,同时可以整理出可用的连续内存,提高终端设备进行应用场景切换时的效率。
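表3的分级策略可以示意为如下Python草图(阈值取正文示例值;在Linux上系统负载可由 /proc/loadavg 或 uptime 一类的接口读取,此处函数名与取值范围均为假设):

```python
def choose_defrag_algorithm(load_percent):
    """按系统负载所处范围选择内存碎片整理算法,对应表3的分级。"""
    if load_percent < 20:
        return "deep"      # 深度:密集区 + 普通区 + 可移动页面区
    if load_percent < 40:
        return "medium"    # 中度:普通区 + 可移动页面区
    if load_percent < 60:
        return "light"     # 轻度:仅可移动页面区
    return "none"          # 系统繁忙,不进行整理
```

各预设范围亦可按实际设计需求调整,仅需修改比较阈值。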
706、执行轻度内存碎片整理算法。
当终端设备上的系统负载处于第三预设范围时,终端设备执行轻度内存碎片整理算法,对可移动页面区进行整理。其中,进行内存碎片整理的步骤与前述步骤704中快速整理连续内存中的轻度内存碎片整理算法类似,具体此处不再赘述。
在本申请实施例中,当系统负载处于第三预设范围时,执行轻度内存碎片整理算法,该第三预设范围对应系统负载较高的情况,可以避免对终端设备上正在运行的其他应用场景产生影响。
707、执行中度内存碎片整理算法。
当终端设备上的系统负载处于第二预设范围时,执行中度内存碎片整理算法,对不可移动页面普通区以及可移动页面区进行内存碎片整理。其中,对于可移动页面区的整理方式与前述步骤704中快速整理连续内存中的轻度内存碎片整理算法类似,具体此处不再赘述。在预置的单位范围内,不可移动页面大于0且不超过密集阈值,则认为该单位范围为不可移动页面普通区。例如,若在1024个页面中,不可移动页面不超过100个,且大于0个,则可以认为该1024个页面属于不可移动页面普通区。对不可移动页面普通区的内存整理具体可以如图9所示,将不可移动页面普通区中的可移动页面进行整理,使可移动页面处于连续的内存页面中,以此整理出连续的空闲页面。其中,可以将可移动页面移动到空闲的连续内存中,也可以移动到与不可移动页面相邻的连续内存中,具体可根据实际设计需求调整,此处不作限定。
因此,在本申请实施例中,当终端设备的系统负载处于第二预设范围时,终端设备处于系统负载适中的情况,可以执行中度内存碎片整理算法,仅对不可移动页面普通区以及可移动页面区进行整理,以适应终端设备适中的系统负载,提高终端设备的运行效率,且可以提前整理出可用连续内存。
在本申请实施例中,对不可移动页面进行整理,避免系统长时间运行后,因不可移动页面增加而降低内存碎片整理的效率以及成功率,可以提高终端设备进行内存碎片整理的效率以及成功率。
708、执行深度内存碎片整理算法。
当终端设备上的系统负载处于第一预设范围时,终端设备可以执行深度内存碎片整理算法,包括对不可移动页面密集区、不可移动页面普通区以及可移动页面区进行内存碎片整理。其中对可移动页面区进行内存碎片整理与前述步骤704中快速整理连续内存中的轻度内存碎片整理算法类似,对不可移动页面普通区进行内存碎片整理与前述步骤707中的中度内存碎片整理算法类似,具体此处不再赘述。对不可移动页面密集区的内存碎片整理具体可以如图10所示,可以将不可移动页面密集区的可移动页面移动到不可移动页面密集区的空闲内存上,以增加终端设备上可用连续内存。具体地,可以将不可移动页面密集区的可移动页面移动到不可移动页面之间间隔的空闲内存页面上。当同时对可移动页面区、不可移动页面普通区以及不可移动页面密集区进行整理时,可以将可移动页面移动到不可移动页面之间间隔的空闲内存页面上,以整理出更多的空闲内存页面。且对不可移动页面普通区以及不可移动页面密集区进行整理,避免系统长时间运行后,因不可移动页面增加而降低内存碎片整理的效率以及成功率。可以降低终端设备在系统长时间运行后的内存碎片化严重程度,可以提高终端设备进行内存碎片整理的效率以及成功率。
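深度整理"把可移动页面填入不可移动页面之间的空闲间隔、在尾部腾出更大连续空闲"的过程,可示意为如下Python草图(示意性模型,不可移动页面保持原位):

```python
def deep_compact(region):
    """region: 元素为 'unmovable' / 'movable' / 'free' 的页面列表。
    将可移动页面依次填入靠前的空闲槽位(含不可移动页面之间的间隔),
    不可移动页面不动,剩余位置变为空闲,从而在尾部形成连续空闲。"""
    movable = sum(1 for p in region if p == "movable")
    out = []
    for p in region:
        if p == "unmovable":
            out.append("unmovable")      # 不可移动页面保持原位
        elif movable > 0:
            out.append("movable")        # 空闲间隔被可移动页面填充
            movable -= 1
        else:
            out.append("free")
    return out

before = ["unmovable", "free", "movable", "unmovable",
          "free", "movable", "free", "free"]
after = deep_compact(before)   # 尾部形成4页连续空闲
```

整理前尾部最多只有2页连续空闲,整理后扩大为4页,体现了对密集区间隔的利用。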
此外,在本申请实施例中,当终端设备从第一应用场景切换到第二应用场景时,若终端设备上的可用连续内存不足以启动或运行该第二应用场景,则终端设备可以进行快速内存整理。例如执行轻度内存碎片整理算法,以使终端设备上的可用连续内存满足该第二应用场景所需的连续内存。例如,终端设备当前运行第一应用场景,在对即将切换的第二应用场景进行预测,并得到目标连续内存后,终端设备上因连续内存不足,需要进行内存碎片整理。在进行内存碎片整理时或进行内存碎片整理之前,若终端设备此时切换到第二应用场景,则此时终端设备也可以启用轻度内存碎片整理算法,快速清理出可用内存,以保障终端设备可以正常启动运行第二应用场景。
在本申请实施例中,通过预测的方式确定目标连续内存后,再主动进行内存碎片整理,以清理出不小于目标连续内存的可用连续内存,以保障终端设备可以正常切换应用场景。在进行内存碎片整理时,首先进行快速内存碎片整理,以快速得到可用连续内存,保障终端设备切换时的连续内存需求。更进一步的,在进行快速内存碎片整理后,若此时还未切换到第二应用场景,则终端设备可以进一步地根据系统负载进行内存碎片整理,根据终端设备的系统负载对内存整理算法进行动态调整,以进一步提高终端设备上的可用连续内存。避免终端设备在系统长时间运行后,不可移动页面增加,内存碎片化严重程度提升,整理出连续内存的成功率降低,导致内存分配速度下降。且可以对终端设备的资源进行合理利用,降低对终端设备上正在运行的应用场景的影响,提高终端设备切换应用场景的效率以及可靠性。
前述对本申请提供的内存管理的方法进行了详细说明,具体地,本申请实施例中的终端设备可以是智能手机、平板电脑、车载移动装置、PDA(Personal Digital Assistant,个人数字助理)、相机或各种穿戴设备等,具体此处不作限定。下面以终端设备中的具体应用场景为例进行更进一步地说明。
请参阅图11,本申请实施例中内存管理的方法的一个具体的切换场景示意图。其中,以该终端设备为智能手机为例,该智能手机中安装了多个应用,其中包括微信以及相机,当用户在使用该智能手机时,可以从微信切换到相机进行拍照。当终端设备从微信切换到相机时,首先进行启动相机,然后进入相机预览,之后才进行相机拍照。其中,在相机预览场景和相机拍照场景都会使用到大量连续内存,若在内存低于一定的阈值才执行内存清理,那么在对相机预览场景和相机拍照场景分配连续内存时,若连续内存不足,此时才进行内存碎片整理将导致等待较长时间分配内存,因此导致终端设备卡顿,影响用户体验。
因此,为提高智能手机的运行效率,本申请提供的内存管理的方法的具体步骤可以包括:
当智能手机当前正在运行微信,此时,终端设备对从微信切换到其他应用场景的次数进行采集,得到从微信切换到相机的切换次数。具体的采集方式可以是,对每次从微信切换到其他应用场景进行记录。例如,从微信切换到相机的次数为100次,从微信切换到应用市场的次数为2次等。然后分别采集智能手机上每个应用或应用场景在启动以及运行时所需的连续内存,包括相机的相机预览场景以及相机拍照场景运行所需的连续内存大小。
具体的采集方式可以是,在智能手机内的内存分配函数中插入切换计数变量,对每次内存分配的连续内存进行计数,每次分别在相机准备进入、进入完成和退出时获取一次计数,进入完成和准备进入时采集的连续内存差值即为该相机启动时的连续内存需求,退出和进入完成的差值即为该相机整体连续内存需求。
在对从微信切换到其他应用以及应用场景的次数进行采集时,可以同时更新从微信切换到其他应用或应用场景的次数,以便后续对相机切换信息进行采集。在采集到相机的连续内存后,也可以对相机所需的连续内存进行更新,以使智能手机根据历史记录数据与采集到的数据确定该相机的连续内存需求。
在确定相机的切换信息以及相机的关联信息后,可以计算当前从微信切换到其他应用或应用场景的概率,其中,可以确定从微信切换到相机的概率为90%,此时智能手机可以预测即将切换到相机场景。对于预测相机启动的概率,智能手机启动相机的样本越多,预测的概率越准确,预测的效率也越高。例如,采样样本超过10万条即可在进入微信时预测出启动相机的概率,若采样样本仅1条,则只能在进入相机时进行预测。而对于相机连续内存需求,仅需一条样本即可预测。
在智能手机预测即将切换到相机场景后,识别出相机启动以及运行所需的连续内存,包括相机预览以及相机拍照所需的连续内存。随后启动内存碎片整理。首先计算智能手机上当前的可用连续内存,若智能手机上当前的可用连续内存不大于相机启动以及运行所需的连续内存,则智能手机可以进行快速内存碎片整理;若智能手机上当前的可用连续内存大于相机启动以及运行所需的连续内存,或智能手机未比较可用连续内存与相机启动以及运行所需的连续内存时,智能手机可计算当前不可移动页面密集区,若当前不可移动页面密集区大于预设值,说明智能手机当前的内存碎片化程度严重,则智能手机也可以进行后续的内存碎片整理步骤,首先进行快速内存碎片整理。快速内存碎片整理可以是执行轻度内存碎片整理算法对智能手机上的内存碎片进行整理,首先快速整理出相机预览场景所需的连续内存,然后整理出相机进行拍照时所需的连续内存。在快速内存碎片整理完成后,可以继续获取智能手机的系统负载,然后根据智能手机的系统负载动态调整内存碎片整理算法,例如,在系统负载小于20%时,执行深度内存碎片整理算法,对不可移动页面密集区、不可移动页面普通区以及可移动页面区进行整理,其中,具体的整理算法与前述图7中的步骤706-步骤708类似,具体此处不再赘述;若系统负载处于20%-40%,此时可以执行中度内存碎片整理算法,对不可移动页面普通区以及可移动页面区进行内存碎片整理;若系统负载处于40%-60%,此时可以执行轻度内存碎片整理算法,对可移动页面区进行内存碎片整理;若系统负载大于60%,则可以不执行内存碎片整理,以避免影响智能手机上正在运行的应用。
前述对本申请实施例中提供的内存管理的方法进行了详细说明,此外,本申请实施例还提供了实施本申请内存管理的方法的终端设备,请参阅图12,本申请实施例中终端设备的一个实施例示意图,可以包括:
数据采集模块1201,用于获取从第一应用场景切换到一个或多个第二应用场景中每个第二应用场景的切换概率,该第一应用场景为该终端设备当前所运行的应用场景,具体可以用于实现前述图4实施例中步骤401的具体步骤;
连续内存需求识别模块1202,用于根据该切换概率中满足预设条件的切换概率以及该一个或多个第二应用场景中切换概率满足预设条件的每个第二应用场景所需的连续内存确定目标连续内存,具体可以用于实现前述图4实施例中步骤403的具体步骤;
主动内存碎片整理模块1203,若该终端设备上可用的连续内存不大于该目标连续内存,则在该终端设备从第一应用场景切换到该一个或多个第二应用场景中的任一第二应用场景之前,用于根据该目标连续内存进行内存碎片整理,以使该终端设备上可用的连续内存大于该目标连续内存,具体可以用于实现前述图4实施例中步骤406的具体步骤。
在一些可能的实施方式中,该主动内存碎片整理模块1203,具体用于:
根据***负载确定内存碎片整理算法;
根据该内存碎片整理算法以及该目标连续内存进行内存碎片整理;
具体可以用于实现前述图7实施例中步骤705以及相关步骤中的具体步骤。
在一些可能的实施方式中,该主动内存碎片整理模块1203,具体还用于:
若该***负载处于第一预设范围,则确定该内存碎片整理算法为深度内存碎片整理算法;
若该***负载处于第二预设范围,则确定该内存碎片整理算法为中度内存碎片整理算法;或
若该***负载处于第三预设范围,则确定该内存碎片整理算法为轻度内存碎片整理算法;
具体可以用于实现前述图7实施例中步骤705-步骤708中的具体步骤。
在一些可能的实施方式中,该数据采集模块1201,具体用于:
获取从该第一应用场景切换到该一个或多个第二应用场景中每个第二应用场景的历史切换次数;
根据该历史切换次数确定从该第一应用场景切换到该一个或多个第二应用场景中每个第二应用场景的切换概率;
具体可以用于实现前述图5实施例中步骤502中的具体步骤。
在一些可能的实施方式中,该连续内存需求识别模块1202,具体用于:
从该一个或多个第二应用场景中确定该切换概率大于阈值的第二应用场景;
根据该切换概率大于阈值的第二应用场景所需的连续内存确定该目标连续内存,具体可以用于实现前述图5实施例中步骤506中的具体步骤。
在一些可能的实施方式中,该连续内存需求识别模块1202,具体还用于:
若存在多个切换概率大于阈值的第二应用场景,对该多个切换概率大于阈值的第二应用场景中每个第二应用场景的切换概率以及所需的连续内存进行加权运算,以得到该目标连续内存,具体可以用于实现前述图5实施例中步骤506中的具体步骤。
在一些可能的实施方式中,该连续内存需求识别模块1202,具体用于:
该终端设备从该一个或多个切换概率大于阈值的第二应用场景中确定所需连续内存最大的目标应用场景;
该终端设备将该目标应用场景所需的连续内存作为该目标连续内存,具体可以用于实现前述图5实施例中步骤506中的具体步骤。
在一些可能的实施方式中,该主动内存碎片整理模块1203,还用于:
当该终端设备从该第一应用场景切换到该一个或多个第二应用场景中的其中一个第二应用场景,且该终端设备上的可用连续内存不满足该其中一个第二应用场景所需的连续内存时,该终端设备通过快速内存碎片整理算法对内存碎片进行整理,具体可以用于实现前述图7实施例中步骤704中的具体步骤。
本申请实施例还提供了一种终端设备,如图13所示,为了便于说明,仅示出了与本发明实施例相关的部分,具体技术细节未揭示的,请参照本发明实施例方法部分。该终端设备可以为包括手机、平板电脑、PDA(Personal Digital Assistant,个人数字助理)、POS (Point of Sales,销售终端)、车载电脑等任意终端设备,以终端设备为手机为例:
图13示出的是与本发明实施例提供的终端相关的手机的部分结构的框图。参考图13,手机包括:射频(Radio Frequency,RF)电路1310、存储器1320、输入单元1330、显示单元1340、传感器1350、音频电路1360、无线保真(wireless fidelity,WiFi)模块1370、处理器1380、以及电源1390等部件。本领域技术人员可以理解,图13中示出的手机结构并不构成对手机的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图13对手机的各个构成部件进行具体的介绍:
RF电路1310可用于收发信息或通话过程中,信号的接收和发送,特别地,将基站的下行信息接收后,给处理器1380处理;另外,将涉及上行的数据发送给基站。通常,RF电路1310包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(Low Noise Amplifier,LNA)、双工器等。此外,RF电路1310还可以通过无线通信与网络和其他设备通信。上述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯系统(Global System of Mobile communication,GSM)、通用分组无线服务(General Packet Radio Service,GPRS)、码分多址(Code Division Multiple Access,CDMA)、宽带码分多址(Wideband Code Division Multiple Access,WCDMA)、长期演进(Long Term Evolution,LTE)、电子邮件、短消息服务(Short Messaging Service,SMS)等。
存储器1320可用于存储软件程序以及模块,处理器1380通过运行存储在存储器1320的软件程序以及模块,从而执行手机的各种功能应用以及数据处理。存储器1320可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器1320可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
输入单元1330可用于接收输入的数字或字符信息,以及产生与手机的用户设置以及功能控制有关的键信号输入。具体地,输入单元1330可包括触控面板1331以及其他输入设备1332。触控面板1331,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板1331上或在触控面板1331附近的操作),并根据预先设定的程式驱动相应的连接装置。可选的,触控面板1331可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器1380,并能接收处理器1380发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板1331。除了触控面板1331,输入单元1330还可以包括其他输入设备1332。具体地,其他输入设备1332可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
显示单元1340可用于显示由用户输入的信息或提供给用户的信息以及手机的各种菜单。显示单元1340可包括显示面板1341,可选的,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板1341。进一步的,触控面板1331可覆盖显示面板1341,当触控面板1331检测到在其上或附近的触摸操作后,传送给处理器1380以确定触摸事件的类型,随后处理器1380根据触摸事件的类型在显示面板1341上提供相应的视觉输出。虽然在图13中,触控面板1331与显示面板1341是作为两个独立的部件来实现手机的输入和输出功能,但是在某些实施例中,可以将触控面板1331与显示面板1341集成而实现手机的输入和输出功能。
手机还可包括至少一种传感器1350,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板1341的亮度,接近传感器可在手机移动到耳边时,关闭显示面板1341和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
音频电路1360、扬声器1361,传声器1362可提供用户与手机之间的音频接口。音频电路1360可将接收到的音频数据转换为电信号后,传输到扬声器1361,由扬声器1361转换为声音信号输出;另一方面,传声器1362将收集的声音信号转换为电信号,由音频电路1360接收后转换为音频数据,再将音频数据输出至处理器1380处理后,经RF电路1310发送给比如另一手机,或者将音频数据输出至存储器1320以便进一步处理。
WiFi属于短距离无线传输技术,手机通过WiFi模块1370可以帮助用户收发电子邮件、浏览网页和访问流式媒体等,它为用户提供了无线的宽带互联网访问。虽然图13示出了WiFi模块1370,但是可以理解的是,其并不属于手机的必须构成,完全可以根据需要在不改变发明的本质的范围内而省略。
处理器1380是手机的控制中心,利用各种接口和线路连接整个手机的各个部分,通过运行或执行存储在存储器1320内的软件程序和/或模块,以及调用存储在存储器1320内的数据,执行手机的各种功能和处理数据,从而对手机进行整体监控。可选的,处理器1380可包括一个或多个处理单元;优选的,处理器1380可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器1380中。该处理器1380可以执行前述图3至图11中由终端设备执行的具体步骤。
手机还包括给各个部件供电的电源1390(比如电池),优选的,电源可以通过电源管理系统与处理器1380逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
尽管未示出,手机还可以包括摄像头、蓝牙模块等,在此不再赘述。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请图3至图11中各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (18)

  1. 一种内存管理的方法,其特征在于,包括:
    获取终端设备从第一应用场景切换到一个或多个第二应用场景中每个第二应用场景的切换概率,所述第一应用场景为所述终端设备当前所运行的应用场景;
    根据所述一个或多个第二应用场景中切换概率满足预设条件的一个或多个第二应用场景所需的连续内存确定目标连续内存;
    若所述终端设备上可用的连续内存小于所述目标连续内存,则在所述终端设备从第一应用场景切换到所述一个或多个第二应用场景中的任一第二应用场景之前,进行内存碎片整理,以使所述终端设备上可用的连续内存大于所述目标连续内存。
  2. 根据权利要求1所述的方法,其特征在于,所述进行内存碎片整理,包括:
    根据所述终端设备的***负载所处的范围确定内存碎片整理算法;
    使用确定的所述内存碎片整理算法对所述终端设备的内存进行内存碎片整理。
  3. 根据权利要求2所述的方法,其特征在于,所述根据***负载所处的范围确定内存碎片整理算法,包括:
    若所述***负载处于第一预设范围,则确定所述内存碎片整理算法为深度内存碎片整理算法;
    若所述***负载处于第二预设范围,则确定所述内存碎片整理算法为中度内存碎片整理算法;或
    若所述***负载处于第三预设范围,则确定所述内存碎片整理算法为轻度内存碎片整理算法。
  4. 根据权利要求1-3中任一项所述的方法,其特征在于,所述获取终端设备从第一应用场景切换到一个或多个第二应用场景的切换概率,包括:
    获取终端设备从所述第一应用场景切换到所述一个或多个第二应用场景中每个第二应用场景的历史切换次数;
    根据所述历史切换次数确定从所述第一应用场景切换到所述一个或多个第二应用场景中每个第二应用场景的切换概率。
  5. 根据权利要求1-4中任一项所述的方法,其特征在于,所述根据所述一个或多个第二应用场景中切换概率满足所述预设条件的一个或多个第二应用场景所需的连续内存确定目标连续内存,包括:
    从所述一个或多个第二应用场景中确定一个或多个切换概率大于阈值的第二应用场景;
    根据所述一个或多个切换概率大于阈值的第二应用场景所需的连续内存确定所述目标连续内存。
  6. 根据权利要求5所述的方法,其特征在于,若从所述一个或多个第二场景中确定切换概率大于所述阈值的第二应用场景有多个,所述根据所述一个或多个切换概率大于所述阈值的第二应用场景所需的连续内存确定所述目标连续内存,包括:
    对所述多个切换概率大于阈值的第二应用场景中每个第二应用场景所需的连续内存进行加权运算,以得到所述目标连续内存。
  7. 根据权利要求5中所述的方法,其特征在于,所述根据所述一个或多个切换概率大于所述阈值的第二应用场景所需的连续内存确定所述目标连续内存,包括:
    从所述一个或多个切换概率大于所述阈值的第二应用场景中确定所需连续内存最大的目标应用场景;
    将所述目标应用场景所需的连续内存作为所述目标连续内存。
  8. 根据权利要求1-6中任一项所述的方法,其特征在于,所述方法还包括:
    当所述终端设备从所述第一应用场景切换到所述一个或多个第二应用场景中的其中一个第二应用场景,且所述终端设备上的可用连续内存不满足所述其中一个第二应用场景所需的连续内存时,所述终端设备通过轻度内存碎片整理算法对所述终端设备的内存进行整理。
  9. 一种终端设备,其特征在于,包括:
    数据采集模块,用于获取终端设备从第一应用场景切换到一个或多个第二应用场景中每个第二应用场景的切换概率,所述第一应用场景为所述终端设备当前所运行的应用场景;
    连续内存需求识别模块,用于根据所述一个或多个第二应用场景中切换概率满足预设条件的一个或多个第二应用场景所需的连续内存确定目标连续内存;
    主动内存碎片整理模块,若所述终端设备上可用的连续内存小于所述目标连续内存,则在所述终端设备从第一应用场景切换到所述一个或多个第二应用场景中的任一第二应用场景之前,用于根据所述目标连续内存进行内存碎片整理,以使所述终端设备上可用的连续内存大于所述目标连续内存。
  10. 根据权利要求9所述的终端设备,其特征在于,所述主动内存碎片整理模块,具体用于:
    根据所述终端设备的***负载所处的范围确定内存碎片整理算法;
    使用确定的所述内存碎片整理算法对所述终端设备的内存进行内存碎片整理。
  11. 根据权利要求10所述的终端设备,其特征在于,所述主动内存碎片整理模块,具体用于:
    若所述***负载处于第一预设范围,则确定所述内存碎片整理算法为深度内存碎片整理算法;
    若所述***负载处于第二预设范围,则确定所述内存碎片整理算法为中度内存碎片整理算法;或
    若所述***负载处于第三预设范围,则确定所述内存碎片整理算法为轻度内存碎片整理算法。
  12. 根据权利要求9-11中任一项所述的终端设备,其特征在于,所述数据采集模块,具体用于:
    获取终端设备从所述第一应用场景切换到所述一个或多个第二应用场景中每个第二应用场景的历史切换次数;
    根据所述历史切换次数确定从所述第一应用场景切换到所述一个或多个第二应用场景中每个第二应用场景的切换概率。
  13. 根据权利要求9-12中任一项所述的终端设备,其特征在于,所述连续内存需求识别模块,具体用于:
    从所述一个或多个第二应用场景中确定一个或多个切换概率大于阈值的第二应用场景;
    根据所述一个或多个切换概率大于阈值的第二应用场景所需的连续内存确定所述目标连续内存。
  14. 根据权利要求13所述的终端设备,其特征在于,所述连续内存需求识别模块,具体用于:
    对所述多个切换概率大于阈值的第二应用场景中每个第二应用场景的切换概率以及所需的连续内存进行加权运算,以得到所述目标连续内存。
  15. 根据权利要求13中所述的终端设备,其特征在于,所述连续内存需求识别模块,具体用于:
    所述终端设备从所述一个或多个切换概率大于阈值的第二应用场景中确定所需连续内存最大的目标应用场景;
    将所述目标应用场景所需的连续内存作为所述目标连续内存。
  16. 根据权利要求9-15中任一项所述的终端设备,其特征在于,所述主动内存碎片整理模块,还用于:
    当所述终端设备从所述第一应用场景切换到所述一个或多个第二应用场景中的其中一个第二应用场景,且所述终端设备上的可用连续内存不满足所述其中一个第二应用场景所需的连续内存时,所述终端设备通过轻度内存碎片整理算法对内存碎片进行整理。
  17. 一种终端设备,其特征在于,包括:
    处理器和存储器;
    所述存储器中存储有计算机程序;
    所述处理器执行所述程序时实现权利要求1-8中任一项所述方法的步骤。
  18. 一种计算机可读存储介质,其上存储有指令,其特征在于,所述指令被处理器执行时实现权利要求1-8中任一项所述方法的步骤。
PCT/CN2019/082098 2018-04-13 2019-04-10 一种内存管理的方法以及相关设备 WO2019196878A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810333058.6 2018-04-13
CN201810333058.6A CN110377527B (zh) 2018-04-13 2018-04-13 一种内存管理的方法以及相关设备

Publications (1)

Publication Number Publication Date
WO2019196878A1 true WO2019196878A1 (zh) 2019-10-17

Family

ID=68163011


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115982060A (zh) * 2021-10-14 2023-04-18 华为技术有限公司 一种内存回收方法及相关装置

Families Citing this family (7)

Publication number Priority date Publication date Assignee Title
CN111078405B (zh) * 2019-12-10 2022-07-15 Oppo(重庆)智能科技有限公司 内存分配方法、装置、存储介质及电子设备
CN111444116B (zh) * 2020-03-23 2022-11-25 海信电子科技(深圳)有限公司 存储空间碎片处理方法及装置
CN112925478B (zh) * 2021-01-29 2022-10-25 惠州Tcl移动通信有限公司 相机存储空间控制方法、智能终端及计算机可读存储介质
US11520695B2 (en) * 2021-03-02 2022-12-06 Western Digital Technologies, Inc. Storage system and method for automatic defragmentation of memory
CN113082705B (zh) * 2021-05-08 2023-09-15 腾讯科技(上海)有限公司 游戏场景切换方法、装置、计算机设备及存储介质
CN116661988A (zh) * 2022-12-29 2023-08-29 荣耀终端有限公司 内存的规整方法、电子设备及可读存储介质
CN116400871B (zh) * 2023-06-09 2023-09-19 Tcl通讯科技(成都)有限公司 碎片整理方法、装置、存储介质及电子设备

Citations (6)

Publication number Priority date Publication date Assignee Title
US20020129192A1 (en) * 2001-03-08 2002-09-12 Spiegel Christopher J. Method, apparatus, system and machine readable medium to pre-allocate a space for data
CN1889737A (zh) * 2006-07-21 2007-01-03 华为技术有限公司 一种资源管理的方法和***
CN101013400A (zh) * 2007-01-30 2007-08-08 金蝶软件(中国)有限公司 一种在内存中缓存数据的方法及装置
CN103150257A (zh) * 2013-02-28 2013-06-12 天脉聚源(北京)传媒科技有限公司 一种内存管理方法和装置
CN105718027A (zh) * 2016-01-20 2016-06-29 努比亚技术有限公司 后台应用程序的管理方法及移动终端
CN105939416A (zh) * 2016-05-30 2016-09-14 努比亚技术有限公司 移动终端及其应用预启动方法

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
FR2907625B1 (fr) * 2006-10-18 2012-12-21 Streamezzo Procede de gestion de memoire dans un terminal client,signal programme d'ordinateur et terminal correspondants
CN105701025B (zh) * 2015-12-31 2019-07-23 华为技术有限公司 一种内存回收方法及装置
CN107133094B (zh) * 2017-06-05 2021-11-02 努比亚技术有限公司 应用管理方法、移动终端及计算机可读存储介质
CN107273011A (zh) * 2017-06-26 2017-10-20 努比亚技术有限公司 应用程序快速切换方法及移动终端



Also Published As

Publication number Publication date
CN110377527A (zh) 2019-10-25
CN110377527B (zh) 2023-09-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19785336; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19785336; Country of ref document: EP; Kind code of ref document: A1)