CN113626181B - Memory cleaning method, device, equipment and readable storage medium - Google Patents

Memory cleaning method, device, equipment and readable storage medium

Info

Publication number
CN113626181B
CN113626181B (Application CN202110744436.1A)
Authority
CN
China
Prior art keywords
memory
target
cleaning
message
cpus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110744436.1A
Other languages
Chinese (zh)
Other versions
CN113626181A (en)
Inventor
范瑞春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202110744436.1A priority Critical patent/CN113626181B/en
Publication of CN113626181A publication Critical patent/CN113626181A/en
Application granted granted Critical
Publication of CN113626181B publication Critical patent/CN113626181B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016 Allocation of resources to service a request, the resource being the memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023 Free address space management
    • G06F 12/0253 Garbage collection, i.e. reclamation of unreferenced memory
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a memory cleaning method, device, equipment and readable storage medium, wherein the method comprises the following steps: when a target memory needs to be cleaned, determining at least two target CPUs to participate in memory cleaning; dividing the target memory into memory segments matching the number of target CPUs; allocating the memory segments respectively to the target CPUs participating in memory cleaning; determining the clearing message for each cleaning independently by using each target CPU; sending each clearing message to the clearing engine by combining the message sending queue and the message waiting queue corresponding to each target CPU; and cleaning the target memory based on each clearing message by using the clearing engine. In the application, because at least two target CPUs participate in memory cleaning and no additional communication is needed between the target CPUs, memory cleaning efficiency can be greatly improved.

Description

Memory cleaning method, device, equipment and readable storage medium
Technical Field
The present disclosure relates to the field of storage technologies, and in particular, to a method, an apparatus, a device, and a readable storage medium for cleaning a memory.
Background
Memory cleaning is indispensable in the development and use of computer equipment. For example, in the development process of SSD (Solid State Disk), many services have a need to clear a large DDR memory, such as performing a FORMAT operation on the SSD or performing a TRIM operation on the SSD.
In the existing memory cleaning scheme, in order to easily and accurately control the cleaning progress in a multi-CPU environment, whenever any CPU receives a FORMAT or TRIM command, the command must be forwarded to one fixed CPU, so that only that fixed CPU performs the cleaning operation while the other CPUs remain idle. Having only one CPU perform the cleaning cannot exploit the advantage of multiple CPUs, and the message interaction required between the CPUs seriously slows down the cleaning. Moreover, when the fixed CPU sends a message to the clearing engine during cleaning, it starts the next cleaning operation only after the clearing engine finishes clearing and returns a completion message, so the queue depth between the CPU and the clearing engine cannot be fully utilized and the cleaning efficiency is low.
In summary, how to effectively improve the memory clearing speed is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a memory cleaning method, a device, equipment and a readable storage medium, which enable a plurality of CPUs to simultaneously clean a memory so as to improve the memory cleaning efficiency.
In order to solve the technical problems, the application provides the following technical scheme:
a memory cleaning method, comprising:
under the condition that a target memory needs to be cleaned, determining at least two target CPUs participating in memory cleaning;
dividing the target memory into memory segments matched with the number of the target CPUs;
allocating the memory segments respectively to the target CPUs participating in memory cleaning;
determining the clearing message for each cleaning independently by using each target CPU;
combining a message sending queue and a message waiting queue corresponding to each target CPU, and sending each clearing message to a clearing engine;
and cleaning the target memory based on each cleaning message by using the cleaning engine.
Preferably, the determining at least two target CPUs participating in memory cleaning in the case that the target memory needs to be cleaned includes:
under the condition that the target memory needs to be cleaned, acquiring service parameters of each CPU;
and determining at least two target CPUs from the CPUs by utilizing the service parameters.
Preferably, dividing the target memory into memory segments matching the number of the target CPUs includes:
dividing the target memory into memory segments whose lengths match the service parameters of the CPUs and whose number matches the number of the target CPUs.
Preferably, dividing the target memory into memory segments matching the number of the target CPUs includes:
evenly dividing the target memory into memory segments matching the number of the target CPUs.
Preferably, determining, by each of the target CPUs, a clear message for each clear independently, includes:
using each target CPU to independently calculate the clearing message for each cleaning based on the corresponding memory segment; the clearing message includes a clearing length and a clearing start position.
Preferably, after the memory segments are respectively allocated to the target CPUs participating in memory cleaning, the method further includes:
acquiring the uncleaned length corresponding to each memory segment;
if there is a target memory segment whose uncleaned length is greater than a preset threshold, determining the uncleaned portion corresponding to the target memory segment;
and replacing the target CPU responsible for the uncleaned portion.
Preferably, the combining a message sending queue and a message waiting queue corresponding to each target CPU sends each clear message to a clear engine, including:
sending each clear message to the clear engine according to the message sending queue;
receiving a response message fed back by the clearing engine;
removing the corresponding clearing message from the message sending queue according to the response message;
and storing the clearing message to be sent currently into the message waiting queue under the condition that the message sending queue is full.
A memory cleaning device, comprising:
the CPU determining module is used for determining at least two target CPUs participating in memory cleaning under the condition that the target memory needs to be cleaned;
the memory dividing module is used for dividing the target memory into memory segments matched with the number of the target CPUs;
the memory allocation module is used for allocating the memory segments to the target CPUs participating in memory cleaning respectively;
the clearing message determining module is used for independently determining clearing messages of each clearing by utilizing each target CPU;
the message sending module is used for combining a message sending queue and a message waiting queue corresponding to each target CPU and sending each clearing message to the clearing engine;
and the memory cleaning module is used for cleaning the target memory based on each cleaning message by using the cleaning engine.
An electronic device, comprising:
a memory for storing a computer program;
and the processor is used for realizing the steps of the memory cleaning method when executing the computer program.
A readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the memory cleaning method described above.
By applying the method provided by the embodiment of the application, when the target memory needs to be cleaned, at least two target CPUs participating in memory cleaning are determined; the target memory is divided into memory segments matching the number of target CPUs; the memory segments are allocated respectively to the target CPUs participating in memory cleaning; each target CPU independently determines the clearing message for each cleaning; the clearing messages are sent to the clearing engine by combining the message sending queue and the message waiting queue corresponding to each target CPU; and the clearing engine cleans the target memory based on each clearing message.
In the application, when the target memory needs to be cleaned, at least two target CPUs to participate in memory cleaning are first identified. The target memory is segmented and the memory segments are allocated to the corresponding target CPUs. Each target CPU can then independently determine the clearing message for each cleaning. Because each target CPU works independently on its own memory segments, no communication link is needed between the CPUs for determining the clearing messages, and no cleaning confusion arises. The clearing messages determined by each target CPU can be sent to the clearing engine by combining the message sending queue and the message waiting queue. The clearing engine can then quickly complete the memory cleaning based on the received clearing messages. Because at least two target CPUs participate in the memory cleaning and no additional communication is needed between the target CPUs, the memory cleaning efficiency can be greatly improved.
Correspondingly, the embodiment of the application also provides a memory cleaning apparatus, an electronic device and a readable storage medium corresponding to the memory cleaning method, which have the technical effects described above and are not described herein again.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings required by the embodiments or the related technical descriptions are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that a person of ordinary skill in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a flowchart illustrating an implementation of a memory cleaning method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a memory cleaning device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic diagram of a specific structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to provide a better understanding of the present application, the present application is described in further detail below with reference to the drawings and the detailed description. It is apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art without inventive effort based on the present disclosure fall within the scope of the present disclosure.
Referring to fig. 1, fig. 1 is a flowchart of a memory cleaning method according to an embodiment of the present application, where the method includes the following steps:
s101, under the condition that a target memory needs to be cleaned, determining at least two target CPUs participating in memory cleaning.
In the embodiment of the present application, the target memory may be considered to need cleaning when a FORMAT operation or a TRIM operation is performed on the SSD. The target memory may be determined from the object to which the formatting or trimming corresponds.
In a multi-core system, there are often multiple CPUs. In the application, at least 2 target CPUs are adopted to perform memory cleaning in order to improve memory cleaning efficiency. That is, in the embodiment of the present application, rather than fixing a certain CPU to perform memory cleaning, at least 2 target CPUs are determined to participate in memory cleaning.
For example, if there are 4 CPUs in the multi-core system, 2 or 3 target CPUs may be selected from the 4 CPUs at random, or all of the 4 CPUs may be determined as target CPUs.
S102, dividing the target memory into memory segments matched with the number of the target CPUs.
After the target CPU is determined, the target memory can be divided into memory segments matched with the number of the target CPUs. The lengths of the memory segments may or may not be consistent.
For example, if the number of target CPUs is N (N is an integer greater than 1), the target memory may be divided into N segments, 2N segments, or more than N segments. That is, after the target memory is divided, each target CPU may be allocated at least one memory segment.
Preferably, in order to balance the load among the target CPUs, step S102 of dividing the target memory into memory segments matching the number of target CPUs may include: evenly dividing the target memory into memory segments matching the number of target CPUs. That is, when dividing the target memory, the length of each memory segment can be equal. Of course, the number of memory segments may be kept consistent with the number of target CPUs, or the number of memory segments may be an integer multiple of the number of target CPUs, so that the total length of the memory segments each target CPU is responsible for remains consistent. A sketch of such an even split is given below.
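As an illustrative sketch only (the patent itself contains no code), an even split of a target memory region into one segment per target CPU could be written in C as follows; the structure name, field names and function name are assumptions made for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical descriptor for one memory segment assigned to a target CPU. */
struct mem_segment {
    uint64_t start;   /* start offset of the segment within the target memory */
    uint64_t length;  /* length of the segment in bytes */
};

/* Evenly divide [base, base + total_len) into n_cpus segments.
 * Any remainder of the integer division is given to the last segment
 * so that the whole target memory is covered. */
static void split_evenly(uint64_t base, uint64_t total_len,
                         unsigned n_cpus, struct mem_segment *segs)
{
    uint64_t per_cpu = total_len / n_cpus;
    for (unsigned i = 0; i < n_cpus; i++) {
        segs[i].start  = base + (uint64_t)i * per_cpu;
        segs[i].length = (i == n_cpus - 1) ? total_len - (uint64_t)i * per_cpu
                                           : per_cpu;
    }
}

int main(void)
{
    struct mem_segment segs[3];
    split_evenly(0, 3 * 1024 * 1024 + 100, 3, segs); /* 3 MiB + 100 B across 3 CPUs */
    for (unsigned i = 0; i < 3; i++)
        printf("CPU%u: start=%llu len=%llu\n", i,
               (unsigned long long)segs[i].start,
               (unsigned long long)segs[i].length);
    return 0;
}
```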
S103, respectively distributing the memory segments to each target CPU participating in memory cleaning.
After the memory segments are divided, they can be allocated respectively to the target CPUs. That is, at least two target CPUs participate in the cleaning of the target memory, and each target CPU processes only the memory segment allocated to it, so the cleaning performed by the target CPUs remains orderly without mutual negotiation.
For example, if the target memory is divided into three memory segments, namely memory segment 1, memory segment 2 and memory segment 3, and the target CPUs are CPU1, CPU2 and CPU3, then memory segment 1 may be allocated to CPU1, memory segment 2 to CPU2, and memory segment 3 to CPU3.
S104, the clearing message for each cleaning is determined independently by each target CPU.
After each target CPU knows the memory segment it is to process, it can independently determine the clearing message for each cleaning with respect to that memory segment.
It should be noted that each cleaning here refers to one operation of the clearing engine. Because the clearing engine performs one memory clearing operation upon receiving a clearing message, each clearing message corresponds to one clearing operation of the clearing engine. Each target CPU determines the clearing messages for its own cleanings, that is, each target CPU determines the clearing messages corresponding to its memory segment based on that memory segment.
Generally, there is an upper limit on the length that the clearing engine can clear in one operation, so for a target CPU the number of clearing messages it determines depends on the length of the allocated memory segment and on that per-operation upper limit. In this embodiment, neither the number of clearing messages determined by a target CPU nor the clearing length corresponding to each clearing message is limited.
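For instance, assuming the roughly 32 KB per-message limit mentioned below, the number of clearing messages a target CPU issues for a segment of length $L_{seg}$ with per-message limit $L_{max}$ is approximately:

$$N_{msg} = \left\lceil \frac{L_{seg}}{L_{max}} \right\rceil, \qquad \text{e.g. } \left\lceil \frac{1\ \mathrm{GB}}{32\ \mathrm{KB}} \right\rceil = 32768 .$$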
Specifically, determining the clearing message for each cleaning independently by using each target CPU includes: using each target CPU to independently calculate the clearing message for each cleaning based on the corresponding memory segment, the clearing message including a clearing length and a clearing start position. That is, after the memory segments are allocated to the corresponding target CPUs, the target CPUs may be triggered so that each target CPU simultaneously begins to clear the memory segment allocated to itself. Specifically, before sending a clearing message to the clearing engine, each target CPU calculates the length that can be cleared this time (generally not more than 32 K), updates the clearing start address and the remaining length to be cleared according to that length, and then instructs the clearing engine to perform the clearing operation by means of the clearing message. That is, the clearing message carries the clearing length and the clearing start position.
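As a minimal sketch under stated assumptions (the 32 KB limit `MAX_CLEAR_LEN`, the message structure and the `send_clear_message` stub are illustrative, not the patent's firmware), the per-CPU loop that walks a memory segment and emits one clearing message per round might look like this:

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_CLEAR_LEN (32u * 1024u)   /* assumed per-message upper limit (32 KB) */

/* Hypothetical clearing message: carries the clearing start position and length. */
struct clear_msg {
    uint64_t start;
    uint32_t length;
};

/* Stand-in for handing a message to the clearing engine. */
static void send_clear_message(const struct clear_msg *msg)
{
    printf("clear start=%llu len=%u\n",
           (unsigned long long)msg->start, msg->length);
}

/* One target CPU independently walks its own memory segment:
 * compute the cleanable length for this round, build the clearing message,
 * then advance the start address and shrink the remaining length. */
static void clear_segment(uint64_t seg_start, uint64_t seg_len)
{
    uint64_t start = seg_start;
    uint64_t remaining = seg_len;

    while (remaining > 0) {
        struct clear_msg msg;
        msg.start  = start;
        msg.length = (remaining > MAX_CLEAR_LEN) ? MAX_CLEAR_LEN
                                                 : (uint32_t)remaining;
        send_clear_message(&msg);

        start     += msg.length;
        remaining -= msg.length;
    }
}

int main(void)
{
    clear_segment(0, 100 * 1024);   /* a 100 KB segment produces 4 messages */
    return 0;
}
```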
S105, combining the message sending queue and the message waiting queue corresponding to each target CPU, and sending each clearing message to the clearing engine.
The message sending queue is a queue for sending clear messages, and the message waiting queue temporarily stores the clear messages when the message sending queue is full.
In the embodiment of the application, each target CPU can independently maintain a corresponding message sending queue and a message waiting queue so as to increase the speed of sending the clear message to the clear engine.
Preferably, in a specific embodiment of the present application, in order to speed up memory cleaning, the next clearing message may be sent directly without waiting for the response message of the clearing engine. In a specific implementation process, step S105 of sending each clearing message to the clearing engine by combining the message sending queue and the message waiting queue corresponding to each target CPU may specifically include:
step one, sending each clearing message to a clearing engine according to a message sending queue;
step two, receiving a response message fed back by the clearing engine;
step three, removing the corresponding clearing message from the message sending queue according to the response message;
and step four, storing the clearing message to be sent currently into a message waiting queue under the condition that the message sending queue is full.
For convenience of description, the following description will be given by combining the above four steps.
After a target CPU sends a clearing message to the clearing engine, as long as the total remaining length to be cleared in its allocated memory segment is not 0, the target CPU does not need to wait for a response message but directly sends the next clearing message to the clearing engine, until the message sending queue is filled with messages. The function by which the target CPU sends the clearing message may be registered as a callback function. The target CPU accurately updates the clearing start address and the total remaining length to be cleared each time a clearing message is sent.
After the message sending queue is filled with clearing messages, sending is stopped and the last clearing message to be sent can be added to the message waiting queue (PENDING LIST), at which point the target CPU can be released. That is, the released target CPU may process the messages of other queues (i.e. messages of services other than memory cleaning), may wait until all other queue messages are processed, then process the PENDING LIST, and after the PENDING LIST is processed, continue to call the callback function corresponding to sending the clearing message and continue the clearing operation. This loop is executed until all clearing operations are completed. A schematic sketch of this flow is given below.
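The following C sketch illustrates this send-queue / PENDING LIST behaviour under assumed data structures and an assumed fixed queue depth; it is a schematic of the described flow rather than the patent's implementation.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define QUEUE_DEPTH 8          /* assumed depth of the message sending queue */

struct clear_msg { uint64_t start; uint32_t length; };

/* Message sending queue towards the clearing engine plus a one-slot
 * pending list for the message that could not be enqueued. */
struct cpu_ctx {
    struct clear_msg sendq[QUEUE_DEPTH];
    unsigned         in_flight;        /* messages sent but not yet acknowledged */
    struct clear_msg pending;          /* PENDING LIST entry */
    bool             has_pending;
    uint64_t         next_start;       /* start address of the next clearing */
    uint64_t         remaining;        /* total remaining length to be cleared */
};

/* Callback: keep issuing clearing messages until the send queue is full
 * or nothing remains; if the queue is full, park the message in PENDING LIST
 * and release the CPU for other work. */
static void issue_clears(struct cpu_ctx *c)
{
    while (c->remaining > 0) {
        struct clear_msg m = {
            .start  = c->next_start,
            .length = c->remaining > 32768 ? 32768 : (uint32_t)c->remaining,
        };
        c->next_start += m.length;
        c->remaining  -= m.length;

        if (c->in_flight < QUEUE_DEPTH) {
            c->sendq[c->in_flight++] = m;   /* send immediately, no wait for reply */
        } else {
            c->pending = m;                 /* queue full: defer to PENDING LIST */
            c->has_pending = true;
            return;                         /* CPU released for other queues */
        }
    }
}

/* Called when the clearing engine returns a completion for one message:
 * free the slot, drain PENDING LIST, and continue issuing. */
static void on_clear_done(struct cpu_ctx *c)
{
    if (c->in_flight > 0)
        c->in_flight--;
    if (c->has_pending && c->in_flight < QUEUE_DEPTH) {
        c->sendq[c->in_flight++] = c->pending;
        c->has_pending = false;
    }
    issue_clears(c);
}

int main(void)
{
    struct cpu_ctx c = { .next_start = 0, .remaining = 1024 * 1024 };
    issue_clears(&c);                 /* fill the queue, park one message */
    while (c.remaining > 0 || c.in_flight > 0 || c.has_pending)
        on_clear_done(&c);            /* simulate completions from the engine */
    printf("segment fully cleared\n");
    return 0;
}
```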
S106, cleaning the target memory based on each cleaning message by using the cleaning engine.
After the purge engine receives the purge messages, the target memory can be purged based on each purge message. That is, each time a clear message is received, memory cleaning is performed until all clear messages corresponding to the target memory are received and successfully processed by the clear engine, so that a plurality of CPUs can clean the target memory.
By applying the method provided by the embodiment of the application, when the target memory needs to be cleaned, at least two target CPUs participating in memory cleaning are determined; the target memory is divided into memory segments matching the number of target CPUs; the memory segments are allocated respectively to the target CPUs participating in memory cleaning; each target CPU independently determines the clearing message for each cleaning; the clearing messages are sent to the clearing engine by combining the message sending queue and the message waiting queue corresponding to each target CPU; and the clearing engine cleans the target memory based on each clearing message.
In the application, when the target memory needs to be cleaned, at least two target CPUs to participate in memory cleaning are first identified. The target memory is segmented and the memory segments are allocated to the corresponding target CPUs. Each target CPU can then independently determine the clearing message for each cleaning. Because each target CPU works independently on its own memory segments, no communication link is needed between the CPUs for determining the clearing messages, and no cleaning confusion arises. The clearing messages determined by each target CPU can be sent to the clearing engine by combining the message sending queue and the message waiting queue. The clearing engine can then quickly complete the memory cleaning based on the received clearing messages. Because at least two target CPUs participate in the memory cleaning and no additional communication is needed between the target CPUs, the memory cleaning efficiency can be greatly improved.
It should be noted that, based on the above embodiments, the embodiments of the present application further provide corresponding improvements. Steps in the preferred/improved embodiments that are the same as or correspond to steps in the above embodiments, and the corresponding advantages, may be cross-referenced, so a detailed description of the preferred/improved embodiments is omitted here.
In a specific embodiment of the present application, considering that different CPUs may be busy to different degrees because of service division and other reasons, in order to ensure memory cleaning efficiency, the relevant service parameters of the CPUs may be taken into account when determining the target CPUs participating in memory cleaning. That is, step S101 of determining at least two target CPUs participating in memory cleaning when the target memory needs to be cleaned may specifically include:
step one, under the condition that a target memory needs to be cleaned, acquiring service parameters of each CPU;
and step two, determining at least two target CPUs from the CPUs by utilizing the service parameters.
For convenience of description, the two steps are described in combination.
Once it is clear that the target memory needs to be cleaned, the service parameters of each CPU can be acquired first; the service parameters may specifically be parameters such as CPU occupancy that indicate how busy a CPU is.
After the service parameters are obtained, several relatively idle target CPUs can be screened from the CPUs based on a threshold value. For example, a service parameter threshold may be set, and a CPU whose service parameter is smaller than the service parameter threshold is determined as a target CPU. Of course, it is also possible to sort the CPUs based on the service parameters and then take a fixed number of relatively idle CPUs as target CPUs based on the sorting.
Further, when dividing the memory segments, in order to balance the load of the CPU, the target memory may be further divided into memory segments whose length matches the service parameters of the CPU and whose number matches the number of the target CPUs. That is, longer memory segments may be partitioned for relatively idle target CPUs and shorter memory segments may be partitioned for relatively busy target CPUs.
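Purely as an illustration of this selection-plus-proportional-division idea (the threshold, occupancy figures and structures below are assumptions, not values from the patent), one might write:

```c
#include <stdint.h>
#include <stdio.h>

#define N_CPUS 4
#define BUSY_THRESHOLD 70u   /* assumed service-parameter threshold (CPU occupancy in %) */

struct mem_segment { uint64_t start; uint64_t length; };

/* Pick the CPUs whose occupancy is below the threshold as target CPUs and
 * give each one a segment whose length is proportional to its idle share. */
static unsigned assign_by_idleness(const unsigned occupancy[N_CPUS],
                                   uint64_t base, uint64_t total_len,
                                   int cpu_of_segment[N_CPUS],
                                   struct mem_segment segs[N_CPUS])
{
    unsigned idle[N_CPUS], idle_sum = 0, n_targets = 0;

    for (unsigned i = 0; i < N_CPUS; i++) {
        if (occupancy[i] < BUSY_THRESHOLD) {
            cpu_of_segment[n_targets] = (int)i;
            idle[n_targets] = 100 - occupancy[i];
            idle_sum += idle[n_targets];
            n_targets++;
        }
    }

    uint64_t offset = base;
    for (unsigned k = 0; k < n_targets; k++) {
        uint64_t len = (k == n_targets - 1)
                     ? base + total_len - offset        /* remainder goes to the last CPU */
                     : total_len * idle[k] / idle_sum;  /* length proportional to idle share */
        segs[k].start  = offset;
        segs[k].length = len;
        offset += len;
    }
    return n_targets;
}

int main(void)
{
    unsigned occ[N_CPUS] = { 20, 85, 40, 60 };  /* CPU1 is too busy to take part */
    int cpu_of_segment[N_CPUS];
    struct mem_segment segs[N_CPUS];

    unsigned n = assign_by_idleness(occ, 0, 1ull << 20, cpu_of_segment, segs);
    for (unsigned k = 0; k < n; k++)
        printf("CPU%d: start=%llu len=%llu\n", cpu_of_segment[k],
               (unsigned long long)segs[k].start,
               (unsigned long long)segs[k].length);
    return 0;
}
```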
In one embodiment of the present application, it is considered that in practical applications, the service requirements may change with time. In order to ensure the memory cleaning efficiency, the cleaning progress/efficiency of each CPU can be effectively monitored after the memory segments are distributed to the target CPU, so that the memory segment distribution is finely adjusted. That is, after executing the above step S103 to allocate the memory segments to the respective target CPUs involved in the memory cleaning, the following steps are further executed:
step one, acquiring the unclean length corresponding to each memory section;
step two, if there is a target memory segment whose uncleaned length is greater than a preset threshold, determining the uncleaned portion corresponding to that target memory segment;
and step three, replacing the target CPU responsible for the uncleaned portion.
For convenience of description, the following description will be given by combining the above three steps.
After each memory segment is allocated to the corresponding target CPU, the cleaning progress of each CPU can be monitored. The uncleaned length corresponding to each memory segment can be obtained. In this embodiment, a threshold may be set for the uncleaned length for different periods; when a target memory segment whose uncleaned length is greater than the preset threshold is found, the uncleaned portion of that target memory segment is determined first, and the uncleaned portion is then allocated to a target CPU other than the one it is currently allocated to. For example, if it is found that 78 K of the memory segment corresponding to CPU1 is not yet cleaned and the preset threshold is 50 K, CPU1 may be considered relatively busy at this time and unlikely to clean the segment quickly, so the remaining uncleaned 78 K may be allocated to CPU2 for cleaning.
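A minimal sketch of this monitoring-and-reassignment step is given below; the 50 KB threshold, the 78 KB figure and the structures mirror the example above, but the code itself is an assumption for illustration only.

```c
#include <stdint.h>
#include <stdio.h>

#define UNCLEAN_THRESHOLD (50u * 1024u)   /* assumed threshold: 50 KB still uncleaned */

/* Per-CPU cleaning progress on its assigned memory segment. */
struct seg_progress {
    unsigned cpu;             /* target CPU currently responsible */
    uint64_t next_start;      /* start address of the uncleaned remainder */
    uint64_t uncleaned_len;   /* remaining length not yet cleaned */
};

/* If a segment's uncleaned length exceeds the threshold, reassign the
 * uncleaned remainder to the given spare target CPU. */
static void rebalance(struct seg_progress *p, unsigned spare_cpu)
{
    if (p->uncleaned_len > UNCLEAN_THRESHOLD && p->cpu != spare_cpu) {
        printf("CPU%u is lagging (%llu bytes left); moving remainder to CPU%u\n",
               p->cpu, (unsigned long long)p->uncleaned_len, spare_cpu);
        p->cpu = spare_cpu;   /* spare CPU continues from next_start */
    }
}

int main(void)
{
    /* e.g. CPU1 still has 78 KB uncleaned, which is above the 50 KB threshold */
    struct seg_progress p = { .cpu = 1, .next_start = 256 * 1024,
                              .uncleaned_len = 78 * 1024 };
    rebalance(&p, 2);         /* remainder is handed to CPU2 */
    return 0;
}
```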
Corresponding to the above method embodiment, the embodiment of the present application further provides a memory cleaning device, where the memory cleaning device described below and the memory cleaning method described above may be referred to correspondingly.
Referring to fig. 2, the apparatus includes the following modules:
the CPU determining module 101 is configured to determine at least two target CPUs participating in memory cleaning, in a case where the target memory needs to be cleaned;
the memory dividing module 102 is configured to divide the target memory into memory segments matching the number of target CPUs;
the memory allocation module 103 is configured to allocate the memory segments to respective target CPUs involved in memory cleaning;
a clear message determining module 104, configured to determine, by using each target CPU, a clear message of each clear independently;
a message sending module 105, configured to combine a message sending queue and a message waiting queue corresponding to each target CPU, and send each clear message to the clear engine;
the memory cleaning module 106 is configured to clean the target memory based on each cleaning message by using the cleaning engine.
By applying the device provided by the embodiment of the application, when the target memory needs to be cleaned, at least two target CPUs participating in memory cleaning are determined; the target memory is divided into memory segments matching the number of target CPUs; the memory segments are allocated respectively to the target CPUs participating in memory cleaning; each target CPU independently determines the clearing message for each cleaning; the clearing messages are sent to the clearing engine by combining the message sending queue and the message waiting queue corresponding to each target CPU; and the clearing engine cleans the target memory based on each clearing message.
In the application, when the target memory needs to be cleaned, at least two target CPUs to participate in memory cleaning are first identified. The target memory is segmented and the memory segments are allocated to the corresponding target CPUs. Each target CPU can then independently determine the clearing message for each cleaning. Because each target CPU works independently on its own memory segments, no communication link is needed between the CPUs for determining the clearing messages, and no cleaning confusion arises. The clearing messages determined by each target CPU can be sent to the clearing engine by combining the message sending queue and the message waiting queue. The clearing engine can then quickly complete the memory cleaning based on the received clearing messages. Because at least two target CPUs participate in the memory cleaning and no additional communication is needed between the target CPUs, the memory cleaning efficiency can be greatly improved.
In a specific embodiment of the present application, the CPU determining module 101 is specifically configured to obtain service parameters of each CPU when the target memory needs to be cleaned; and determining at least two target CPUs from the CPUs by using the service parameters.
In one embodiment of the present application, the memory dividing module 102 is specifically configured to divide the target memory into memory segments whose lengths match the service parameters of the CPUs and whose number matches the number of target CPUs.
In one embodiment of the present application, the memory dividing module 102 is specifically configured to evenly divide the target memory into memory segments matching the number of target CPUs.
In a specific embodiment of the present application, the clear message determining module 104 is specifically configured to independently calculate, by using each target CPU, a clear message of each cleaning based on a memory segment corresponding to each target CPU; the clear message includes a clear length and a clear start position.
In a specific embodiment of the present application, further comprising:
the dynamic adjustment module is used for acquiring the uncleaned length corresponding to each memory segment after the memory segments are respectively allocated to the target CPUs participating in memory cleaning; if there is a target memory segment whose uncleaned length is greater than the preset threshold, determining the uncleaned portion corresponding to that target memory segment; and replacing the target CPU responsible for the uncleaned portion.
In one embodiment of the present application, the message sending module 105 is specifically configured to send each clearing message to the clearing engine according to the message sending queue; receive a response message fed back by the clearing engine; remove the corresponding clearing message from the message sending queue according to the response message; and, when the message sending queue is full, store the clearing message currently to be sent into the message waiting queue.
Corresponding to the above method embodiment, the embodiment of the present application further provides an electronic device, and an electronic device described below and a memory cleaning method described above may be referred to correspondingly.
Referring to fig. 3, the electronic device includes:
a memory 332 for storing a computer program;
the processor 322 is configured to implement the steps of the memory cleaning method of the above method embodiment when executing the computer program.
Specifically, referring to fig. 4, fig. 4 is a schematic diagram of a specific structure of an electronic device according to the present embodiment. The electronic device may vary considerably depending on configuration or performance, and may include one or more processors (central processing units, CPU) 322 (e.g., one or more processors) and a memory 332, where the memory 332 stores one or more computer applications 342 or data 344. The memory 332 may be transient storage or persistent storage. The program stored in the memory 332 may include one or more modules (not shown), and each module may include a series of instruction operations on the data processing apparatus. Still further, the central processor 322 may be configured to communicate with the memory 332 and execute a series of instruction operations in the memory 332 on the electronic device 301.
The electronic device 301 may also include one or more power supplies 326, one or more wired or wireless network interfaces 350, one or more input/output interfaces 358, and/or one or more operating systems 341.
The steps in the memory cleaning method described above may be implemented by the structure of the electronic device.
Corresponding to the above method embodiments, the embodiments of the present application further provide a readable storage medium, where a readable storage medium described below and a memory cleaning method described above may be referred to correspondingly.
A readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the memory cleaning method of the above method embodiments.
The readable storage medium may be a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application, but such implementation is not to be considered as beyond the scope of this application.

Claims (9)

1. A memory cleaning method, characterized by comprising the following steps:
under the condition that a target memory needs to be cleaned, determining at least two target CPUs participating in memory cleaning;
dividing the target memory into memory segments matched with the number of the target CPUs;
allocating the memory segments respectively to the target CPUs participating in memory cleaning;
determining the clearing message for each cleaning independently by using each target CPU;
combining a message sending queue and a message waiting queue corresponding to each target CPU, and sending each clearing message to a clearing engine;
cleaning the target memory based on each cleaning message by using the cleaning engine;
the dividing the target memory into memory segments matched with the number of the target CPUs includes:
and dividing the target memory average into memory segments matched with the number of the target CPUs.
2. The memory cleaning method according to claim 1, wherein determining at least two target CPUs involved in memory cleaning in the case where the target memory needs to be cleaned comprises:
under the condition that the target memory needs to be cleaned, acquiring service parameters of each CPU;
and determining at least two target CPUs from the CPUs by utilizing the service parameters.
3. The memory cleaning method according to claim 2, wherein dividing the target memory into memory segments matching the number of the target CPUs includes:
dividing the target memory into memory sections with the lengths matched with the CPU service parameters and the numbers matched with the numbers of the target CPUs.
4. The memory cleaning method according to claim 1, wherein determining the clearing message for each cleaning independently by using each of the target CPUs comprises:
using each target CPU to independently calculate the clearing message for each cleaning based on the corresponding memory segment; the clearing message includes a clearing length and a clearing start position.
5. The memory cleaning method according to claim 1, wherein after allocating the memory segments to the target CPUs involved in the memory cleaning, respectively, further comprising:
acquiring the uncleaned length corresponding to each memory segment;
if there is a target memory segment whose uncleaned length is greater than a preset threshold, determining the uncleaned portion corresponding to the target memory segment;
and replacing the target CPU responsible for the uncleaned portion.
6. The memory purging method as set forth in any one of claims 1 to 5, wherein the combining a message transmission queue and a message waiting queue corresponding to each of the target CPUs, transmitting each of the purge messages to a purge engine, includes:
sending each clear message to the clear engine according to the message sending queue;
receiving a response message fed back by the clearing engine;
clearing the clearing message in the message sending queue by utilizing the response message;
and storing the clearing message to be sent currently into the message waiting queue under the condition that the message sending queue is full.
7. A memory cleaning device, comprising:
the CPU determining module is used for determining at least two target CPUs participating in memory cleaning under the condition that the target memory needs to be cleaned;
the memory dividing module is used for dividing the target memory into memory segments matched with the number of the target CPUs;
the memory allocation module is used for allocating the memory segments to the target CPUs participating in memory cleaning respectively;
the clearing message determining module is used for independently determining clearing messages of each clearing by utilizing each target CPU;
the message sending module is used for combining a message sending queue and a message waiting queue corresponding to each target CPU and sending each clearing message to the clearing engine;
the memory cleaning module is used for cleaning the target memory based on the cleaning messages by using the cleaning engine;
the memory dividing module is specifically configured to evenly divide the target memory into memory segments matching the number of the target CPUs.
8. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the memory cleaning method according to any one of claims 1 to 6 when executing the computer program.
9. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the memory cleaning method according to any of claims 1 to 6.
CN202110744436.1A 2021-06-30 2021-06-30 Memory cleaning method, device, equipment and readable storage medium Active CN113626181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110744436.1A CN113626181B (en) 2021-06-30 2021-06-30 Memory cleaning method, device, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110744436.1A CN113626181B (en) 2021-06-30 2021-06-30 Memory cleaning method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN113626181A CN113626181A (en) 2021-11-09
CN113626181B true CN113626181B (en) 2023-07-25

Family

ID=78378820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110744436.1A Active CN113626181B (en) 2021-06-30 2021-06-30 Memory cleaning method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113626181B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101055533A (en) * 2007-05-28 2007-10-17 中兴通讯股份有限公司 Multithreading processor dynamic EMS memory management system and method
CN101178679A (en) * 2007-12-14 2008-05-14 华为技术有限公司 EMS memory checking method and system in multi-nucleus system
CN101178669A (en) * 2007-12-13 2008-05-14 华为技术有限公司 Resource recovery method and apparatus
CN102799471A (en) * 2012-05-25 2012-11-28 上海斐讯数据通信技术有限公司 Method and system for process recycling of operating system
CN103544063A (en) * 2013-09-30 2014-01-29 三星电子(中国)研发中心 Method and device for removing processes applied to Android platform
CN103581008A (en) * 2012-08-07 2014-02-12 杭州华三通信技术有限公司 Router and software upgrading method thereof
CN105205409A (en) * 2015-09-14 2015-12-30 浪潮电子信息产业股份有限公司 Method for preventing data leakage during memory multiplexing and computer system
CN107315622A (en) * 2017-06-19 2017-11-03 杭州迪普科技股份有限公司 A kind of method and device of cache management
CN110069422A (en) * 2018-01-23 2019-07-30 普天信息技术有限公司 Core buffer recovery method based on MIPS multi-core processor
CN112286688A (en) * 2020-11-05 2021-01-29 北京深维科技有限公司 Memory management and use method, device, equipment and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7519784B2 (en) * 2006-03-31 2009-04-14 Lenovo Singapore Pte. Ltd. Method and apparatus for reclaiming space in memory

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101055533A (en) * 2007-05-28 2007-10-17 中兴通讯股份有限公司 Multithreading processor dynamic EMS memory management system and method
CN101178669A (en) * 2007-12-13 2008-05-14 华为技术有限公司 Resource recovery method and apparatus
CN101178679A (en) * 2007-12-14 2008-05-14 华为技术有限公司 EMS memory checking method and system in multi-nucleus system
CN102799471A (en) * 2012-05-25 2012-11-28 上海斐讯数据通信技术有限公司 Method and system for process recycling of operating system
CN103581008A (en) * 2012-08-07 2014-02-12 杭州华三通信技术有限公司 Router and software upgrading method thereof
CN103544063A (en) * 2013-09-30 2014-01-29 三星电子(中国)研发中心 Method and device for removing processes applied to Android platform
CN105205409A (en) * 2015-09-14 2015-12-30 浪潮电子信息产业股份有限公司 Method for preventing data leakage during memory multiplexing and computer system
CN107315622A (en) * 2017-06-19 2017-11-03 杭州迪普科技股份有限公司 A kind of method and device of cache management
CN110069422A (en) * 2018-01-23 2019-07-30 普天信息技术有限公司 Core buffer recovery method based on MIPS multi-core processor
CN112286688A (en) * 2020-11-05 2021-01-29 北京深维科技有限公司 Memory management and use method, device, equipment and medium

Also Published As

Publication number Publication date
CN113626181A (en) 2021-11-09

Similar Documents

Publication Publication Date Title
EP3637733B1 (en) Load balancing engine, client, distributed computing system, and load balancing method
US11526276B2 (en) Upgrade management method and scheduling node, and storage system
US10866832B2 (en) Workflow scheduling system, workflow scheduling method, and electronic apparatus
CN110389843B (en) Service scheduling method, device, equipment and readable storage medium
CN113382077B (en) Micro-service scheduling method, micro-service scheduling device, computer equipment and storage medium
US10216593B2 (en) Distributed processing system for use in application migration
CN112261125B (en) Centralized unit cloud deployment method, device and system
CN116663639B (en) Gradient data synchronization method, system, device and medium
CN113626181B (en) Memory cleaning method, device, equipment and readable storage medium
CN113760549A (en) Pod deployment method and device
CN117435324A (en) Task scheduling method based on containerization
CN113849295A (en) Model training method and device and computer readable storage medium
JP5577745B2 (en) Cluster system, process allocation method, and program
JP2007102332A (en) Load balancing system and load balancing method
CN115941604A (en) Flow distribution method, device, equipment, storage medium and program product
CN115878309A (en) Resource allocation method, device, processing core, equipment and computer readable medium
CN113347238A (en) Message partitioning method, system, device and storage medium based on block chain
CN112114971A (en) Task allocation method, device and equipment
CN114546393A (en) Multitask program compiling method and device and multi-core chip
CN114584567B (en) Block chain-based batch operation processing method and device
CN112631743B (en) Task scheduling method, device and storage medium
JP2006277675A (en) Data processor, data processing server, data processing system, and data processing program
CN115167973B (en) Data processing system of cloud computing data center
CN114465958B (en) Input and output control method, device and medium
CN117931595A (en) Pressure testing method, device, equipment and storage medium based on JMeter clusters

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant