CN116185642B - Container memory optimization method and device, storage medium and electronic device - Google Patents

Container memory optimization method and device, storage medium and electronic device

Info

Publication number
CN116185642B
Authority
CN
China
Prior art keywords
memory
container
target
priority
target container
Prior art date
Legal status
Active
Application number
CN202310448670.9A
Other languages
Chinese (zh)
Other versions
CN116185642A (en)
Inventor
Wang Siyuan (王思远)
Current Assignee
Anhui Haima Cloud Technology Co ltd
Original Assignee
Anhui Haima Cloud Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Anhui Haima Cloud Technology Co ltd
Priority to CN202310448670.9A
Publication of CN116185642A
Application granted
Publication of CN116185642B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3055Monitoring arrangements for monitoring the status of the computing system or of the computing system component, e.g. monitoring if the computing system is on, off, available, not available
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45583Memory management, e.g. access or allocation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention provides a method and a device for optimizing a container memory, a storage medium and an electronic device, wherein the method comprises the following steps: monitoring the memory state of a target container in an operating system, wherein the operating system is the host system of the target container; triggering generation of a memory release instruction of the target container according to the memory state; and releasing the occupied memory of the target container in response to the memory release instruction. This solves the technical problem in the related art that container memory isolation is not respected when memory is optimized in the host system, and improves the flexibility of memory management and the stability of the host system.

Description

Container memory optimization method and device, storage medium and electronic device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and apparatus for optimizing a container memory, a storage medium, and an electronic device.
Background
In the related art, with the development of cloud application fields such as cloud gaming and cloud mobile phones, using containerized management in an Android host system to achieve higher resource utilization efficiency has gradually become a new technical direction. However, the Android system was not designed for containerization, and many of its system services do not support containerized schemes, so errors occur when the Android system operates in a multi-container environment. One of these services is the low memory management service (low memory killer daemon, lmkd) of the Android system.
The lmkd in the related art does not support container isolation. The host system (such as an Android system) uses the lmkd system service to perform optimization work in a low-memory state, which mainly consists of killing processes according to certain rules to release available memory. By design, lmkd performs low-memory detection and optimization only for the host Android system as a whole and cannot adapt to a multi-container environment. In addition, the process-killing algorithm of lmkd is simplistic: when performing low-memory detection and optimization, it traverses the process list maintained in the activity manager service in priority order and preferentially kills (terminates) low-priority processes to release memory, which is relatively inflexible. In practical environments, it is often necessary to keep certain application processes or system processes alive, or to adjust priorities appropriately.
In view of the above problems in the related art, no effective solution has been found yet.
Disclosure of Invention
The embodiment of the invention provides a method and a device for optimizing a container memory, a storage medium and an electronic device.
According to one embodiment of the present invention, there is provided a method for optimizing a container memory, including: monitoring the memory state of a target container in an operating system, wherein the operating system is a host system of the target container; triggering and generating a memory release instruction of the target container according to the memory state; and responding to the memory release instruction, and releasing the occupied memory of the target container.
Optionally, monitoring the memory state of the target container in the operating system includes: searching for a target control group in which the target container is located; and calling a vmpressure function of the target control group to monitor a vmpressure event in the target control group, wherein the vmpressure event is used for indicating the memory reclamation state of the target control group.
Optionally, monitoring the memory state of the target container in the operating system includes: searching for a target control group in which the target container is located; and calling a pressure stall information (psi) function of the target control group to monitor a psi event in the target control group, wherein the psi event is used for indicating the memory allocation state of the target control group.
Optionally, searching the target control group where the target container is located includes: reading a namespace of the target container; and searching a target control group matched with the name space.
Optionally, releasing the occupied memory of the target container includes: judging whether a low memory management (lmkd) process runs in a container system of the target container or not; if the lmkd process runs in the container system, traversing all processes in the proc directory of the container system, and executing the following steps for each process until all the processes in the proc directory are traversed: judging whether the priority of the current first process is lower than a release priority threshold of the lmkd process; and deleting the first process in the container system if the priority of the first process is lower than a release priority threshold.
Optionally, releasing the occupied memory of the target container includes: judging whether an lmkd process runs in a container system of the target container; if the lmkd process runs in the container system, traversing all processes in the container system, and executing the following steps for each process until all processes are traversed: judging whether the current second process is in the preference application table of the target container; if the second process is in the preference application table of the target container, reading the survival priority of the second process from the configuration file of the preference application table; judging whether the survival priority is lower than a release priority threshold of the lmkd process; and deleting the second process if the survival priority is lower than the release priority threshold.
Optionally, before reading the survival priority of the second process from the configuration file of the preference application table, the method further includes: configuring a process identifier and a survival priority of the second process based on a user instruction, and overriding the system default priority of the second process with the survival priority; generating a key value pair of the second process by using the process identifier of the second process as the key and the survival priority as the value; and adding the key value pair to the configuration file.
According to another embodiment of the present invention, there is provided an optimizing apparatus for a container memory, including: the monitoring module is used for monitoring the memory state of a target container in an operating system, wherein the operating system is a host system of the target container; the generation module is used for triggering and generating a memory release instruction of the target container according to the memory state; and the release module is used for responding to the memory release instruction and releasing the occupied memory of the target container.
Optionally, the monitoring module includes: the searching unit is used for searching for a target control group where the target container is located; the first calling unit is used for calling the vmpressure function of the target control group to monitor the vmpressure event in the target control group, wherein the vmpressure event is used for indicating the memory reclamation state of the target control group.
Optionally, the monitoring module includes: the searching unit is used for searching for a target control group where the target container is located; and the second calling unit is used for calling the pressure stall information (psi) function of the target control group to monitor psi events in the target control group, wherein the psi events are used for indicating the memory allocation state of the target control group.
Optionally, the search unit includes: a reading subunit, configured to read a namespace of the target container; and the searching subunit is used for searching the target control group matched with the naming space.
Optionally, the release module includes: the judging unit is used for judging whether the low memory management lmkd process runs in the container system of the target container; a first traversing unit, configured to, if an lmkd process runs in the container system, traverse all processes in a proc directory of the container system, and perform, for each process, the following steps until all processes in the proc directory are traversed: judging whether the priority of the current first process is lower than a release priority threshold of the lmkd process; and deleting the first process in the container system if the priority of the first process is lower than a release priority threshold.
Optionally, the release module includes: a judging unit, configured to judge whether an lmkd process runs in a container system of the target container; a second traversing unit, configured to, if the lmkd process runs in the container system, traverse all the processes in the container system, and perform the following steps on each process until all processes are traversed: judging whether the current second process is in the preference application table of the target container; if the second process is in the preference application table of the target container, reading the survival priority of the second process from the configuration file of the preference application table; judging whether the survival priority is lower than a release priority threshold of the lmkd process; and deleting the second process if the survival priority is lower than the release priority threshold.
Optionally, the release module further includes: a configuration unit, configured to configure the process identifier and the survival priority of the second process based on a user instruction before the second traversing unit reads the survival priority of the second process from the configuration file of the preference application table, and to override the system default priority of the second process with the survival priority; a generating unit, configured to generate a key value pair of the second process by using the process identifier of the second process as the key and the survival priority as the value; and an adding unit, configured to add the key value pair to the configuration file.
According to a further embodiment of the invention, there is also provided a storage medium having stored therein a computer program, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
According to a further embodiment of the invention, there is also provided an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
According to the invention, the memory state of a target container in an operating system is monitored, where the operating system is the host system of the target container; a memory release instruction of the target container is generated when triggered by the memory state; and the occupied memory of the target container is released in response to the memory release instruction. By monitoring the memory state of the target container in the operating system and releasing the occupied memory of the target container in a targeted manner, memory management can be performed on a per-container basis within the host system, and targeted memory release can be performed in an operating system running multiple containers. This improves the granularity of memory optimization, solves the technical problem in the related art that container memory isolation is not respected when memory is optimized in the host system, and improves the flexibility of memory management and the stability of the host system.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
FIG. 1 is a block diagram of the hardware architecture of a container memory optimization computer according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for optimizing a container memory according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a host system and container in an embodiment of the invention;
FIG. 4 is a block diagram of a device for optimizing a container memory according to an embodiment of the present invention.
Detailed Description
In order to make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without making any inventive effort shall fall within the scope of the present application. It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
The method embodiment provided in the first embodiment of the present application may be performed in a mobile phone, a tablet, a server, a computer, or a similar electronic terminal. Taking a computer as an example, FIG. 1 is a block diagram of a hardware structure of a container memory optimization computer according to an embodiment of the present invention. As shown in FIG. 1, the computer may include one or more processors 102 (only one is shown in FIG. 1) (the processor 102 may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory 104 for storing data, and optionally, a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those of ordinary skill in the art that the configuration shown in FIG. 1 is merely illustrative and is not intended to limit the configuration of the computer described above. For example, the computer may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to the method for optimizing a container memory in an embodiment of the present invention, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, to implement the method described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. In this embodiment, the processor 102 is configured to control the target virtual character to perform a specified operation to complete the game task in response to the man-machine interaction instruction and the game policy. The memory 104 is used to store program scripts for electronic games, configuration information, sound resource information for virtual characters, and the like.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communications provider of a computer. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
Optionally, the input/output device 108 further includes a man-machine interaction screen, configured to obtain a man-machine interaction instruction through a man-machine interaction interface, and further configured to present a picture of the virtual scene.
In this embodiment, a method for optimizing a container memory is provided. FIG. 2 is a flow chart of a method for optimizing a container memory according to an embodiment of the present invention. As shown in FIG. 2, the flow includes the following steps:
Step S202, monitoring the memory state of a target container in an operating system, wherein the operating system is the host system of the target container;
the host system operates on the host, fig. 3 is a schematic diagram of the host system and the containers in the embodiment of the present invention, where the host system of the host has a memory capacity of 8GB, 2 containers (each container has 4 processes) are created on the operating system of the host, each container can only use the memory allocated with 2GB at most, the containers are isolated from each other, and if the memory used exceeds 2GB, the memory of the container can only be reconfigured.
The operating system of this embodiment may be an Android system, a Linux system, or another system capable of running containers (such as Docker); the Android system is taken as an example herein.
The memory state of this embodiment is used to represent the memory occupancy state of the target container, such as the memory usage ratio, the occupied memory capacity, the remaining memory capacity, and the like.
Step S204, triggering and generating a memory release instruction of the target container according to the memory state;
Optionally, when the memory state indicates that the memory pressure of the target container is relatively high (e.g., higher than a threshold value or a preset pressure level), generation of the memory release instruction of the target container is triggered.
Step S206, responding to the memory release instruction, and releasing the occupied memory of the target container;
Through the above steps, the memory state of the target container in the operating system is monitored, where the operating system is the host system of the target container; a memory release instruction of the target container is generated when triggered by the memory state; and the occupied memory of the target container is released in response to the memory release instruction. By monitoring the memory state of the target container and releasing its occupied memory in a targeted manner, memory management can be performed on a per-container basis within the host system, and targeted memory release can be performed in an operating system running multiple containers. This improves the granularity of memory optimization, solves the technical problem in the related art that container memory isolation is not respected when memory is optimized in the host system, and improves the flexibility of memory management and the stability of the host system.
In an operating system such as Android, the lmkd system service is used to detect the memory usage of the whole system and to release memory when memory is tight. This release is typically accomplished by killing low-priority processes. As a system service, lmkd is started and initialized during the system startup phase. The Android system uses a user-space lmkd. During lmkd initialization, different low-memory detection mechanisms are selected according to the system property configuration; these mechanisms detect memory shortage in the system and thereby trigger the memory optimization (process killing/terminating) action. This embodiment is illustrated with the two low-memory detection mechanisms supported in Android: vmpressure and psi (pressure stall information).
In one implementation of this embodiment, the host system employs the vmpressure mechanism, and monitoring the memory state of the target container in the operating system includes:
S11, searching for the target control group in which the target container is located;
S12, calling the vmpressure function of the target control group to monitor vmpressure events in the target control group, wherein a vmpressure event is used for indicating the memory reclamation state of the target control group.
This is implemented through the event_control interface of the memory subsystem in the kernel control group (cgroup) mechanism. With this mechanism enabled, the lmkd process first registers memory events of different levels (e.g. LOW, MEDIUM, CRITICAL) in event_control and then listens on an eventfd to receive low-memory events from the kernel.
The vmpressure mechanism observes memory page reclamation in the kernel: the more pages fail to be reclaimed, the higher the memory pressure, and the number or proportion of pages that fail to be reclaimed is monitored and reported as a vmpressure event. According to the pressure, events are classified into the low, medium, and critical pressure levels, and user space is notified through the eventfd.
In the implementation of the related art, the multi-container situation is not considered, so when the memory pressure of the whole system is high, the event is notified to the lmkd processes in the host system and in all container systems at the same time, and the host system and all container systems then perform the process-killing action. In fact, due to the memory isolation limits of the containers, only some containers may be under memory pressure, yet other containers with relatively abundant memory are also triggered to kill processes to release memory, which leads to unreasonable behavior.
In this embodiment, during the vmpressure initialization stage of the lmkd process, the corresponding cgroup control group is found by acquiring the namespace of the current container, and the vmpressure event is registered precisely within that group. In the subsequent monitoring process, only vmpressure events in that cgroup are monitored, so that accurate monitoring of a single container is realized.
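The following is a minimal sketch of such a per-group registration using the cgroup v1 memory.pressure_level/cgroup.event_control interface; the group path /sys/fs/cgroup/memory/container_a is a hypothetical example and error handling is abbreviated.

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void) {
    int efd  = eventfd(0, 0);   /* lmkd will block on this descriptor        */
    int plfd = open("/sys/fs/cgroup/memory/container_a/memory.pressure_level",
                    O_RDONLY);
    int cfd  = open("/sys/fs/cgroup/memory/container_a/cgroup.event_control",
                    O_WRONLY);
    if (efd < 0 || plfd < 0 || cfd < 0) { perror("open"); return 1; }

    /* Register for "medium" vmpressure events of this control group only. */
    char cmd[64];
    snprintf(cmd, sizeof(cmd), "%d %d medium", efd, plfd);
    write(cfd, cmd, strlen(cmd));

    uint64_t count;
    read(efd, &count, sizeof(count));   /* returns when the kernel reports
                                           pressure inside this cgroup      */
    /* ... trigger the memory release flow for this container here ...      */
    return 0;
}

Because the registration names one cgroup, pressure in another container's group does not wake this listener, which is exactly the isolation described above.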
By adopting the scheme of this embodiment, the host system and the container systems do not interfere with each other: each container only pays attention to vmpressure low-memory events within its own namespace. This prevents a memory shortage in container A from triggering another container B, whose memory is relatively abundant, to kill processes to release memory, and improves the stability of the host device.
In the above embodiment, the vmpressure mechanism only reflects the pressure during memory reclamation and cannot accurately reflect the pressure when memory is being requested, so its event reporting may not be accurate enough. Some versions of Android therefore incorporate the psi mechanism to detect the pressure that occurs when memory is requested.
In another implementation of this embodiment, the host system employs a psi mechanism, and monitoring the memory state of the target container in the operating system includes:
S21, searching for the target control group in which the target container is located;
S22, calling the pressure stall information (psi) function of the target control group to monitor psi events in the target control group, wherein a psi event is used for indicating the memory allocation state of the target control group.
Optionally, searching for the target control group where the target container is located includes: reading the namespace of the target container; and looking up the target control group that matches the namespace.
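A minimal sketch of one way to locate the container's memory control group is given below; it assumes cgroup v1 and that the lookup is performed from inside the target container, so parsing the ":memory:" entry of /proc/self/cgroup yields that container's group path (the file format is the kernel's, the helper name is hypothetical).

#include <stdio.h>
#include <string.h>

/* Returns 0 and writes the memory-cgroup path of the calling process
 * (e.g. "/container_a") into out, by parsing /proc/self/cgroup.        */
int find_memory_cgroup(char *out, size_t len) {
    FILE *f = fopen("/proc/self/cgroup", "r");
    char line[256];
    if (f == NULL) return -1;
    while (fgets(line, sizeof(line), f) != NULL) {
        char *p = strstr(line, ":memory:");
        if (p != NULL) {
            snprintf(out, len, "%s", p + strlen(":memory:"));
            out[strcspn(out, "\n")] = '\0';   /* drop the trailing newline */
            fclose(f);
            return 0;
        }
    }
    fclose(f);
    return -1;
}

The resulting path, appended to the memory controller mount point, identifies the control group in which the vmpressure or psi event should be registered.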
The principle of the psi mechanism adopted in this embodiment is to count the proportion of time in the recent period during which tasks were stalled waiting for memory allocation. It counts two indexes: the proportion of time during which at least one task (some) was stalled on memory, and the proportion of time during which all tasks (full) were stalled on memory. Each index is further computed over three statistical windows, namely 10 seconds, 60 seconds, and 300 seconds.
The psi values for memory can be viewed via /proc/pressure/memory, for example:
# cat /proc/pressure/memory
some avg10=0.00 avg60=0.00 avg300=0.00 total=304985781
full avg10=0.00 avg60=0.00 avg300=0.00 total=112265598
Here, the some row gives the proportion of time in the last 10 seconds, 60 seconds, and 300 seconds during which at least one task was stalled waiting for memory, and total is the cumulative stall time (in microseconds). The full row gives the proportion of time during which all tasks were stalled waiting for memory.
In the lmkd implementation, a low-memory condition is detected by registering three memory pressure levels, low, medium, and critical, with psi and listening for psi event reports. The three pressure levels are defined in lmkd as follows:
low: the average blocking time of a certain task in every 1000ms due to the application of the memory reaches 70ms;
medium: the average blocking time of a certain task in every 1000ms due to the application of the memory reaches 100ms;
critical: the average time that all tasks were blocked by application memory in every 1000ms reaches 70ms.
In the related art, the lmkd process of the Android system uses the psi mechanism without considering the multi-container scenario: memory allocation stalls of all tasks in the host system and in all container systems are counted together, and the process-killing action is triggered for all of them together, which is unreasonable.
This embodiment optimizes the implementation logic of the lmkd process so that the psi statistics of the host and of each container system are isolated from one another. Per-group psi is only supported by the cgroup v2 mechanism, not by cgroup v1, while the Android system still uses cgroup v1 to limit memory resources. Therefore, the cgroup v1 memory subsystem needs to be disabled at the system boot stage, a corresponding cgroup v2 control group is created, and the memory subsystem is added to it, thereby providing per-group isolation of memory psi.
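A minimal sketch of that setup is shown below; it assumes the cgroup v2 hierarchy is mounted at /sys/fs/cgroup, that the v1 memory controller was already disabled at boot (for example via the cgroup_no_v1=memory kernel command-line option), and that container_a is a hypothetical group name.

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void) {
    /* Enable the memory controller for children of the root v2 group so
     * each child group gets its own memory.pressure (psi) file.          */
    int fd = open("/sys/fs/cgroup/cgroup.subtree_control", O_WRONLY);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "+memory", strlen("+memory"));
    close(fd);

    /* Create a v2 control group for the container. */
    if (mkdir("/sys/fs/cgroup/container_a", 0755) < 0) perror("mkdir");

    /* The container's init process would then be moved into the group by
     * writing its PID to /sys/fs/cgroup/container_a/cgroup.procs.        */
    return 0;
}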
In this embodiment, psi registration is initialized during the initialization stage of the lmkd process: the corresponding cgroup is found by acquiring the namespace of the current container, and the psi event is registered precisely in that cgroup. During the subsequent listening process, only psi events in that cgroup are listened to.
By adopting the scheme of this embodiment, the host system and the container systems do not interfere with each other: each only pays attention to psi low-memory events within its own namespace. This prevents a memory shortage in container A from triggering container B, whose memory is relatively abundant, to kill processes to release memory, and improves the stability of the host device.
In one implementation scenario, releasing the occupied memory of the target container includes: judging whether the low memory management lmkd process runs in a container system of the target container; if the lmkd process is running in the container system, traversing all processes in the proc directory of the container system, and executing the following steps for each process until all processes in the proc directory are traversed: judging whether the priority of the current first process is lower than a release priority threshold of the lmkd process; and if the priority of the first process is lower than the release priority threshold, deleting the first process in the container system.
After detecting a low-memory event, the related-art lmkd process starts the memory release flow and kills processes in priority order from low to high (a smaller oom_score_adj value represents a higher priority) to release memory. A process list is maintained inside lmkd and is updated by the activity manager service (AMS, ActivityManagerService) via a socket; that is, lmkd only releases application processes managed by AMS and cannot release memory held by processes not managed by AMS.
The scheme of this embodiment is optimized by comparison to realize deep memory release: each time the memory release flow is triggered, if the memory pressure at that moment is high, the priority threshold to be released is correspondingly high (the min_score_adj value is small); if memory has still not been successfully released by killing AMS-managed processes, the deep memory release flow of this embodiment is triggered.
The deep release flow first judges whether the current lmkd process runs in the host system: if the namespace of the current lmkd process is equal to the initial namespace value in the kernel, the current lmkd process is in the host system.
The /proc directory is then traversed (this directory lists all processes running in the operating system, including processes started by the host operating system and processes started inside containers), and the following judgment is made for each traversed process:
If the current lmkd process is in the host system, it is judged whether the namespace of the currently traversed process is equal to the initial namespace value in the kernel; if not, the currently traversed process belongs to a container and is left untouched; otherwise, the process belongs to the host system and may be selectively released according to its priority information.
If the current lmkd process is in a container system, or the current lmkd process is in the host system and the currently traversed process also belongs to the host system, that is, the traversed process and the lmkd process are in the same namespace, it is judged whether the priority of the traversed process is lower than the highest priority currently to be released (oom_score_adj > min_score_adj). If so, the priority of the traversed process is relatively low and it is killed in the deep release; otherwise, the process is skipped and the traversal continues with subsequent processes.
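The following is a minimal sketch of this deep-release walk over /proc, assuming the lmkd instance kills only processes that share its PID namespace and that min_score_adj is the current release threshold; function and variable names are illustrative.

#include <ctype.h>
#include <dirent.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>

static void deep_release(int min_score_adj) {
    char self_ns[64], proc_ns[64], path[64];
    ssize_t n = readlink("/proc/self/ns/pid", self_ns, sizeof(self_ns) - 1);
    if (n < 0) return;
    self_ns[n] = '\0';

    DIR *d = opendir("/proc");
    struct dirent *e;
    while (d != NULL && (e = readdir(d)) != NULL) {
        if (!isdigit((unsigned char)e->d_name[0])) continue;   /* not a PID */

        /* Skip processes that live in a different (container) namespace.  */
        snprintf(path, sizeof(path), "/proc/%s/ns/pid", e->d_name);
        n = readlink(path, proc_ns, sizeof(proc_ns) - 1);
        if (n < 0) continue;
        proc_ns[n] = '\0';
        if (strcmp(self_ns, proc_ns) != 0) continue;

        /* Kill only processes whose priority is below the release threshold. */
        snprintf(path, sizeof(path), "/proc/%s/oom_score_adj", e->d_name);
        FILE *f = fopen(path, "r");
        int adj;
        if (f != NULL && fscanf(f, "%d", &adj) == 1 && adj > min_score_adj)
            kill((pid_t)atoi(e->d_name), SIGKILL);
        if (f != NULL) fclose(f);
    }
    if (d != NULL) closedir(d);
}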
By adopting the scheme of this implementation scenario, memory occupied by some low-priority system processes and by other processes not managed by the AMS (such as scripts or native programs started by the root user with root authority, programs started via adb, and the like) can be released, providing more available memory for high-priority processes of the system and improving the degree of memory release.
In another implementation scenario, releasing the occupied memory of the target container includes: judging whether an lmkd process runs in the container system of the target container; if the lmkd process runs in the container system, traversing all processes in the container system, and executing the following steps for each process until all processes are traversed: judging whether the current second process is in the preference application table of the target container; if the second process is in the preference application table of the target container, reading the survival priority of the second process from the configuration file of the preference application table; judging whether the survival priority is lower than a release priority threshold of the lmkd process; and if the survival priority is lower than the release priority threshold, deleting the second process.
Optionally, before reading the survival priority of the second process from the configuration file of the preference application table, the method further includes: configuring a process identifier and a survival priority of the second process based on a user instruction, and overriding the system default priority of the second process with the survival priority; generating a key value pair of the second process by using the process identifier of the second process as the key and the survival priority as the value; and adding the key value pair to the configuration file.
After detecting a low-memory event, the lmkd process operates on all the application processes it maintains during memory release, so every application process may be killed. In practical scenarios, there is a need to keep certain applications alive, or to increase the survival probability of certain applications, for flexibility. For such cases, this embodiment provides a preference mechanism for lmkd in which preferred applications are written into a preference application configuration file. The data format may be defined as follows: each row defines a key value pair for one preferred application, where the key is the process name of the preferred application and the value is the min_score_adj value corresponding to that application (its survival priority). The preferred application is released only when the min_score_adj of the current lmkd release (corresponding to the release priority threshold of the lmkd process) is lower than the configured min_score_adj value, that is, only when the priority to be released is higher than the priority configured for the preferred application in the configuration file.
The lmkd process can periodically reload the content of the preference application configuration file into the preference application table, so that configuration hot-loading takes effect in real time. The user can update the configuration file at any time, for example by adding or deleting preferred applications, adjusting the survival priority of a certain preferred application, and the like.
When the lmkd process releases memory, it judges whether the name of the process to be released is in the preference application table; if so, it compares the min_score_adj of the current lmkd release with the min_score_adj recorded in the preference application table, and kills the process to release memory only when the min_score_adj of the current release is lower than the min_score_adj in the table. If an application needs to be kept alive unconditionally, its min_score_adj value in the preference application configuration file is configured to be very low (such as -1000).
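The check above can be summarized in the following minimal sketch; the configuration file format shown in the comment and the process names are hypothetical examples, not part of this embodiment.

#include <string.h>

/* Hypothetical in-memory copy of the preference application table, loaded
 * from a file whose rows are "<process name> <configured min_score_adj>":
 *   com.example.keepalive   -1000    (never killed)
 *   com.example.important     200    (killed only once the release
 *                                     threshold drops below 200)          */
struct pref_entry { const char *name; int min_score_adj; };

static const struct pref_entry pref_table[] = {
    { "com.example.keepalive", -1000 },
    { "com.example.important",   200 },
};

/* Returns 1 if the named process may be killed at the current release
 * threshold, applying the rule described above.                           */
int may_kill(const char *proc_name, int release_min_score_adj) {
    for (size_t i = 0; i < sizeof(pref_table) / sizeof(pref_table[0]); i++) {
        if (strcmp(pref_table[i].name, proc_name) == 0)
            return release_min_score_adj < pref_table[i].min_score_adj;
    }
    return 1;   /* not a preferred application: normal lmkd rules apply */
}

With a configured value of -1000 the comparison can never be satisfied, so the application is effectively kept alive, as described above.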
By adopting the scheme of this implementation scenario, flexible configuration of processes is realized: preferred applications can be flexibly protected when memory is released, important processes are prevented from being killed by mistake, and the flexibility of the system is improved.
The scheme of this embodiment provides an improved lmkd implementation that supports a container-isolated low-memory detection mechanism. It also provides an improved lmkd memory release mode that can additionally search application processes not maintained by AMS and realize deep memory release. In addition, a whitelist mechanism in the process-killing flow can be realized, which increases the survival probability of preferred processes and improves flexibility.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, or of course by means of hardware, but in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods according to the embodiments of the present invention.
Example 2
In this embodiment, an apparatus for optimizing a container memory is further provided, which is used to implement the foregoing embodiments and preferred implementations; what has already been described will not be repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
FIG. 4 is a block diagram of a device for optimizing a container memory according to an embodiment of the present invention. As shown in FIG. 4, the device includes: a monitoring module 40, a generating module 42, and a releasing module 44, wherein:
a monitoring module 40, configured to monitor a memory state of a target container in an operating system, where the operating system is a host system of the target container;
a generating module 42, configured to trigger generation of a memory release instruction of the target container according to the memory state;
and a releasing module 44, configured to release the occupied memory of the target container in response to the memory release instruction.
Optionally, the monitoring module includes: the searching unit is used for searching for a target control group where the target container is located; the first calling unit is used for calling the vmpressure function of the target control group to monitor the vmpressure event in the target control group, wherein the vmpressure event is used for indicating the memory reclamation state of the target control group.
Optionally, the monitoring module includes: the searching unit is used for searching for a target control group where the target container is located; and the second calling unit is used for calling the pressure stall information (psi) function of the target control group to monitor psi events in the target control group, wherein the psi events are used for indicating the memory allocation state of the target control group.
Optionally, the search unit includes: a reading subunit, configured to read a namespace of the target container; and the searching subunit is used for searching the target control group matched with the naming space.
Optionally, the release module includes: the judging unit is used for judging whether the low memory management lmkd process runs in the container system of the target container; a first traversing unit, configured to, if an lmkd process runs in the container system, traverse all processes in a proc directory of the container system, and perform, for each process, the following steps until all processes in the proc directory are traversed: judging whether the priority of the current first process is lower than a release priority threshold of the lmkd process; and deleting the first process in the container system if the priority of the first process is lower than a release priority threshold.
Optionally, the release module includes: a judging unit, configured to judge whether an lmkd process runs in a container system of the target container; a second traversing unit, configured to, if the lmkd process runs in the container system, traverse all the processes in the container system, and perform the following steps on each process until all processes are traversed: judging whether the current second process is in the preference application table of the target container; if the second process is in the preference application table of the target container, reading the survival priority of the second process from the configuration file of the preference application table; judging whether the survival priority is lower than a release priority threshold of the lmkd process; and deleting the second process if the survival priority is lower than the release priority threshold.
Optionally, the release module further includes: a configuration unit, configured to configure the process identifier and the survival priority of the second process based on a user instruction before the second traversing unit reads the survival priority of the second process from the configuration file of the preference application table, and to override the system default priority of the second process with the survival priority; a generating unit, configured to generate a key value pair of the second process by using the process identifier of the second process as the key and the survival priority as the value; and an adding unit, configured to add the key value pair to the configuration file.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Example 3
An embodiment of the invention also provides a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described storage medium may be configured to store a computer program for performing the steps of:
s1, monitoring the memory state of a target container in an operating system, wherein the operating system is a host system of the target container;
s2, triggering and generating a memory release instruction of the target container according to the memory state;
s3, responding to the memory release instruction, and releasing the occupied memory of the target container.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: a usb disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing a computer program.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
s1, monitoring the memory state of a target container in an operating system, wherein the operating system is a host system of the target container;
s2, triggering and generating a memory release instruction of the target container according to the memory state;
s3, responding to the memory release instruction, and releasing the occupied memory of the target container.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments and optional implementations, and this embodiment is not described herein.
The foregoing embodiment numbers of the present application are merely for description and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The above-described apparatus embodiments are merely exemplary; for example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be implemented through some interfaces, units or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing program codes.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also fall within the scope of protection of the present application.

Claims (6)

1. A method for optimizing the memory of a container, comprising:
monitoring the memory state of a target container in an operating system, wherein the operating system is a host system of the target container, and the memory state is used for representing the memory pressure of the target container;
triggering and generating a memory release instruction of the target container according to the memory state, wherein when the memory state represents that the memory pressure of the target container is higher than a threshold value or a preset pressure level, triggering and generating the memory release instruction of the target container;
and responding to the memory release instruction, and releasing the occupied memory of the target container, wherein releasing the occupied memory of the target container comprises the following steps: judging whether an lmkd process runs in a container system of the target container; if the lmkd process runs in the container system, traversing all processes in the container system, and executing the following steps for each process until all processes are traversed: judging whether the current second process is in a preference application table of the target container; if the second process is in the preference application table of the target container, reading the survival priority of the second process from the configuration file of the preference application table; judging whether the survival priority is lower than a release priority threshold of the lmkd process; if the survival priority is lower than the release priority threshold, deleting the second process, wherein before reading the survival priority of the second process from the configuration file of the preference application table, the method further comprises: configuring a process identifier and a survival priority of the second process based on a user instruction, and overriding the system default priority of the second process with the survival priority; generating a key value pair of the second process by using the process identifier of the second process as a key and the survival priority as a value; and adding the key value pair to the configuration file;
The monitoring of the memory state of the target container in the operating system comprises the following steps: searching for a target control group in which the target container is located; invoking a vmpressure function of the target control group to monitor a vmpressure event in the target control group, wherein the vmpressure event is used for indicating the memory reclamation state of the target control group; and/or monitoring the memory state of the target container in the operating system comprises: searching for a target control group in which the target container is located; and calling a pressure stall information (psi) function of the target control group to monitor a psi event in the target control group, wherein the psi event is used for indicating the memory allocation state of the target control group.
2. The method of claim 1, wherein finding a target control group in which the target container is located comprises:
reading a namespace of the target container;
and searching for a target control group matching the namespace.
3. The method of claim 1, wherein releasing the occupied memory of the target container comprises:
judging whether a low memory killer daemon (lmkd) process runs in the container system of the target container;
if the lmkd process runs in the container system, traversing all processes in the proc directory of the container system, and executing the following steps for each process until all the processes in the proc directory have been traversed: judging whether the priority of the current first process is lower than a release priority threshold of the lmkd process; and deleting the first process in the container system if the priority of the first process is lower than the release priority threshold.
4. An apparatus for optimizing the memory of a container, comprising:
the monitoring module is used for monitoring the memory state of a target container in an operating system, wherein the operating system is the host system of the target container, and the memory state is used for representing the memory pressure of the target container;
the generation module is used for triggering generation of a memory release instruction of the target container according to the memory state, wherein the memory release instruction of the target container is triggered and generated when the memory state indicates that the memory pressure of the target container is higher than a threshold value or reaches a preset pressure level;
the release module is configured to release the occupied memory of the target container in response to the memory release instruction, wherein the release module includes: a judging unit, configured to judge whether an lmkd process runs in the container system of the target container; a second traversing unit, configured to, if the lmkd process runs in the container system, traverse all processes in the container system and perform the following steps on each process until all processes have been traversed: judging whether the current second process is in a preference application table of the target container; if the second process is in the preference application table of the target container, reading the survival priority of the second process from the configuration file of the preference application table; judging whether the survival priority is lower than a release priority threshold of the lmkd process; and deleting the second process if the survival priority is lower than the release priority threshold; wherein the release module further includes: a configuration unit, configured to, before the second traversing unit reads the survival priority of the second process from the configuration file of the preference application table, configure a process identifier and a survival priority of the second process based on a user instruction and override the system default priority of the second process with the survival priority; a generating unit, configured to generate a key-value pair of the second process by using the process identifier of the second process as the key and the survival priority as the value; and an adding unit, configured to add the key-value pair to the configuration file;
wherein the monitoring module includes: a searching unit, used for searching for a target control group in which the target container is located; and a first calling unit, used for invoking a vmpressure function of the target control group to monitor a vmpressure event in the target control group, wherein the vmpressure event is used for indicating the memory reclamation state of the target control group; and/or the monitoring module includes: the searching unit, used for searching for a target control group in which the target container is located; and a second calling unit, used for invoking the pressure stall information (psi) function of the target control group to monitor psi events in the target control group, wherein the psi events are used for indicating the memory application state of the target control group.
5. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method of any of claims 1 to 3 when run.
6. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the method of any of claims 1 to 3.
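The claims above recite several Linux-level mechanisms: locating a container's control group, subscribing to vmpressure or psi memory-pressure events, maintaining a preference application table, and reclaiming memory by terminating low-priority processes in lmkd fashion. The sketches below are illustrative only and are not part of the claims or the specification; the paths, container identifiers, thresholds, and helper names they use are assumptions. First, a minimal sketch of locating the control group of a target container (claim 2), assuming a cgroup v1 memory hierarchy mounted at /sys/fs/cgroup/memory and a container whose cgroup directory name contains its container ID:

import os
from typing import Optional

def find_target_cgroup(container_id: str,
                       root: str = "/sys/fs/cgroup/memory") -> Optional[str]:
    """Walk the memory cgroup hierarchy and return the first directory whose
    name contains the container ID (a hypothetical matching rule; real
    container runtimes name their cgroup directories differently)."""
    for dirpath, dirnames, _ in os.walk(root):
        for name in dirnames:
            if container_id in name:
                return os.path.join(dirpath, name)
    return None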
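Next, a sketch of subscribing to vmpressure events for that control group through the cgroup v1 memory.pressure_level and cgroup.event_control files, which is one way the vmpressure event of claim 1 can be received. It assumes cgroup v1 and Python 3.10 or later (for os.eventfd); the chosen pressure level is only an example:

import os
import struct

def wait_for_vmpressure(cgroup_dir: str, level: str = "medium") -> int:
    """Block until the kernel signals memory pressure at the given level
    ("low", "medium" or "critical") for the cgroup, then return the event
    count read from the eventfd."""
    efd = os.eventfd(0)                                    # notification fd
    plfd = os.open(os.path.join(cgroup_dir, "memory.pressure_level"),
                   os.O_RDONLY)
    try:
        # Register "<eventfd> <pressure_level fd> <level>" with the cgroup.
        with open(os.path.join(cgroup_dir, "cgroup.event_control"), "w") as f:
            f.write(f"{efd} {plfd} {level}")
        data = os.read(efd, 8)                             # blocks until an event fires
        return struct.unpack("Q", data)[0]
    finally:
        os.close(plfd)
        os.close(efd)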
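A corresponding sketch of the psi-based branch, registering a pressure-stall-information trigger on a cgroup v2 memory.pressure file and polling for it; the stall and window values are arbitrary examples and assume a kernel built with PSI support:

import os
import select

def wait_for_psi(cgroup_dir: str,
                 stall_us: int = 150_000,
                 window_us: int = 1_000_000) -> bool:
    """Write a "some <stall> <window>" trigger to the cgroup's memory.pressure
    file and poll until the kernel raises the trigger as POLLPRI."""
    fd = os.open(os.path.join(cgroup_dir, "memory.pressure"),
                 os.O_RDWR | os.O_NONBLOCK)
    try:
        os.write(fd, f"some {stall_us} {window_us}".encode())
        poller = select.poll()
        poller.register(fd, select.POLLPRI)    # PSI triggers are delivered as POLLPRI
        events = poller.poll()                 # blocks until the trigger fires
        return any(ev & select.POLLPRI for _, ev in events)
    finally:
        os.close(fd)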
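The preference application table of claims 1 and 4 amounts to a user-maintained mapping from process identifier to survival priority that overrides the system default. A minimal sketch, assuming a JSON file as the configuration file format (the claims do not fix a format):

import json

def add_preference(config_path: str, process_id: str, survival_priority: int) -> None:
    """Record {process identifier: survival priority} as a key-value pair in the
    configuration file, overriding whatever default priority the process would
    otherwise have."""
    try:
        with open(config_path) as f:
            table = json.load(f)
    except FileNotFoundError:
        table = {}
    table[process_id] = survival_priority      # key = process id, value = survival priority
    with open(config_path, "w") as f:
        json.dump(table, f, indent=2)

At reclamation time, the traversal step of claim 1 would simply look each process up in this table and compare the stored value against the lmkd release priority threshold.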
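Finally, a sketch of the reclamation loop of claim 3: traverse the container's proc directory, read a per-process priority, and terminate every process whose priority falls below the release threshold. It assumes the code runs inside the container's pid namespace, uses oom_score_adj merely as a stand-in for the claim's per-process priority, and applies the comparison exactly as the claim states:

import os
import signal

def release_container_memory(proc_root: str, release_threshold: int):
    """Walk the proc directory, read each process's oom_score_adj as its
    priority, and SIGKILL processes whose priority is below the threshold."""
    killed = []
    for entry in os.listdir(proc_root):
        if not entry.isdigit():
            continue                                    # skip non-process entries
        pid = int(entry)
        try:
            with open(os.path.join(proc_root, entry, "oom_score_adj")) as f:
                priority = int(f.read().strip())
        except (FileNotFoundError, ProcessLookupError, ValueError):
            continue                                    # process exited mid-traversal
        if priority < release_threshold:
            try:
                os.kill(pid, signal.SIGKILL)            # "delete" the process
                killed.append(pid)
            except (ProcessLookupError, PermissionError):
                pass
    return killed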
CN202310448670.9A 2023-04-24 2023-04-24 Container memory optimization method and device, storage medium and electronic device Active CN116185642B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310448670.9A CN116185642B (en) 2023-04-24 2023-04-24 Container memory optimization method and device, storage medium and electronic device


Publications (2)

Publication Number Publication Date
CN116185642A (en) 2023-05-30
CN116185642B (en) 2023-07-18

Family

ID=86449297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310448670.9A Active CN116185642B (en) 2023-04-24 2023-04-24 Container memory optimization method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN116185642B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106878042A (en) * 2015-12-18 2017-06-20 北京奇虎科技有限公司 Container resource regulating method and system based on SLA
US10216512B1 (en) * 2016-09-29 2019-02-26 Amazon Technologies, Inc. Managed multi-container builds
CN111324423A (en) * 2020-03-03 2020-06-23 腾讯科技(深圳)有限公司 Method and device for monitoring processes in container, storage medium and computer equipment
CN114020407A (en) * 2021-10-28 2022-02-08 济南浪潮数据技术有限公司 Container management cluster container group scheduling optimization method, device and equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109213539B (en) * 2016-09-27 2021-10-26 华为技术有限公司 Memory recovery method and device
CN108121605B (en) * 2017-12-31 2021-11-16 武汉烽火云创软件技术有限公司 Yann-based cgroup memory control optimization method and system
US11792216B2 (en) * 2018-06-26 2023-10-17 Suse Llc Application layer data protection for containers in a containerization environment
US11900173B2 (en) * 2021-05-18 2024-02-13 Kyndryl, Inc. Container runtime optimization
CN113656182A (en) * 2021-08-23 2021-11-16 北京沃东天骏信息技术有限公司 Memory expansion management method and device, electronic equipment and storage medium
CN115756847A (en) * 2022-11-18 2023-03-07 ***股份有限公司 EPC memory resource management system, method, device, physical machine and medium


Also Published As

Publication number Publication date
CN116185642A (en) 2023-05-30

Similar Documents

Publication Publication Date Title
CN111464355B (en) Method and device for controlling expansion and contraction capacity of Kubernets container cluster and network equipment
CN108683720B (en) Container cluster service configuration method and device
CN105656714B (en) Data acquisition reporting method and device for mobile equipment
CN111555963B (en) Message pushing method and device, electronic equipment and storage medium
KR101781339B1 (en) Method and device for updating client
CN103729300B (en) The management method and relevant apparatus of nonvolatile memory
CN111538563A (en) Event analysis method and device for Kubernetes
CN113656142B (en) Container group pod-based processing method, related system and storage medium
CN112016030B (en) Message pushing method, device, server and computer storage medium
CN112230847B (en) Method, system, terminal and storage medium for monitoring K8s storage volume
CN109597837B (en) Time sequence data storage method, time sequence data query method and related equipment
CN109688094B (en) Suspicious IP configuration method, device, equipment and storage medium based on network security
CN115794549A (en) Method, device and medium for managing and controlling resource occupied by application program
CN107155403B (en) A kind of processing method and VNFM of life cycle events
CN114070755B (en) Virtual machine network flow determination method and device, electronic equipment and storage medium
CN116185642B (en) Container memory optimization method and device, storage medium and electronic device
CN107526690B (en) Method and device for clearing cache
CN110750350B (en) Large resource scheduling method, system, device and readable storage medium
CN106156210B (en) Method and device for determining application identifier matching list
US20170149893A1 (en) Metadata server, network device and automatic resource management method
CN110543357B (en) Method, related device and system for managing application program object
CN107819595B (en) Network slice management device
CN102780570A (en) Achieving method and system for management of cloud computing devices
CN114585035A (en) Voice call method, device and computer readable storage medium
CN114048033A (en) Load balancing method and device for batch running task and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant