CN113296940B - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
CN113296940B
CN113296940B
Authority
CN
China
Prior art keywords
page
memory
memory page
descriptor
compressed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110362007.8A
Other languages
Chinese (zh)
Other versions
CN113296940A (en)
Inventor
朱延海
任镇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Innovation Co
Original Assignee
Alibaba Singapore Holdings Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Singapore Holdings Pte Ltd filed Critical Alibaba Singapore Holdings Pte Ltd
Publication of CN113296940A publication Critical patent/CN113296940A/en
Application granted granted Critical
Publication of CN113296940B publication Critical patent/CN113296940B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016 Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals, the resource being the memory
    • G06F 9/5022 Mechanisms to release resources
    • G06F 9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Memory System (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of this specification provide a data processing method and apparatus. The data processing method includes: receiving a data processing request for a memory page, and obtaining and scanning a memory page list based on the data processing request; determining at least one memory page to be compressed from the memory page list according to a preset algorithm, wherein each memory page to be compressed comprises a first page descriptor, a second page descriptor and at least one third page descriptor; creating a data management structure for the memory page to be compressed based on the first page descriptor, the second page descriptor and the at least one third page descriptor; copying the first page descriptor and the second page descriptor from the memory page to be compressed, storing them in the data management structure, and deleting the first page descriptor, the second page descriptor and the at least one third page descriptor of the memory page to be compressed.

Description

Data processing method and device
Technical Field
The embodiments of this specification relate to the field of computer technology, and in particular to two data processing methods. One or more embodiments of this specification further relate to two data processing apparatuses, a computing device, and a computer-readable storage medium.
Background
With the rapid development of big data, massive amounts of data are generated, which increases the demand for data storage space. Existing approaches compress memory data in one of two ways: one compresses memory data and swaps it out to external storage, freeing up more physical memory; the other selects cold memory page frames and compresses the data within those frames to increase free memory. Both approaches consume considerable computing resources and delay user accesses, resulting in low access efficiency.
Disclosure of Invention
In view of this, embodiments of this specification provide two data processing methods. One or more embodiments of this specification also relate to two data processing apparatuses, a computing device, and a computer-readable storage medium, to address the technical shortcomings of the prior art.
According to a first aspect of embodiments of the present specification, there is provided a data processing method, including:
receiving a data processing request for a memory page, and obtaining and scanning a memory page list based on the data processing request;
determining at least one memory page to be compressed from the memory page list according to a preset algorithm, wherein each memory page to be compressed comprises a first page descriptor, a second page descriptor and at least one third page descriptor;
creating a data management structure for the memory page to be compressed based on the first page descriptor, the second page descriptor, and the at least one third page descriptor;
copying the first page descriptor and the second page descriptor from the memory page to be compressed, storing the first page descriptor and the second page descriptor in the data management structure, and deleting the first page descriptor, the second page descriptor and the at least one third page descriptor of the memory page to be compressed.
According to a second aspect of embodiments of the present specification, there is provided another data processing method comprising:
receiving a data processing request for a memory page, and allocating a data storage structure for the memory page based on the data processing request;
acquiring a page table corresponding to the memory page, and determining a structure address of a data management structure corresponding to the memory page based on the page table;
acquiring a first page descriptor and a second page descriptor of the memory page from the data management structure based on the structure address, and determining at least one third page descriptor based on the second page descriptor;
storing the first page descriptor, the second page descriptor, and at least one third page descriptor to the data storage structure.
According to a third aspect of embodiments of the present specification, there is provided a data processing apparatus comprising:
the first receiving module is configured to receive a data processing request for a memory page, and acquire and scan a memory page list based on the data processing request;
the first determining module is configured to determine at least one memory page to be compressed from the memory page list according to a preset algorithm, wherein each memory page to be compressed comprises a first page descriptor, a second page descriptor and at least one third page descriptor;
a creation module configured to create a data management structure for the memory page to be compressed based on the first page descriptor, the second page descriptor, and at least one third page descriptor;
and a first storage module configured to copy the first page descriptor and the second page descriptor from the memory page to be compressed, store the first page descriptor and the second page descriptor in the data management structure, and delete the first page descriptor, the second page descriptor and the at least one third page descriptor of the memory page to be compressed.
According to a fourth aspect of embodiments of the present specification, there is provided another data processing apparatus comprising:
A second receiving module configured to receive a data processing request for a memory page, and allocate a data storage structure for the memory page based on the data processing request;
the second determining module is configured to acquire a page table corresponding to the memory page, and determine a structure address of a data management structure corresponding to the memory page based on the page table;
an acquisition module configured to acquire a first page descriptor and a second page descriptor of the memory page from the data management structure based on the structure address, and to determine at least one third page descriptor based on the second page descriptor;
a second storage module configured to store the first page descriptor, the second page descriptor, and at least one third page descriptor to the data storage structure.
According to a fifth aspect of embodiments of the present specification, there is provided a computing device comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions and the processor is configured to execute the computer-executable instructions to perform steps of the data processing method.
According to a sixth aspect of embodiments of the present specification, there is provided a computer readable storage medium storing computer executable instructions which, when executed by a processor, implement the steps of any one of the data processing methods.
In one embodiment of this specification, a data processing request for a memory page is received, and a memory page list is obtained and scanned based on the data processing request; at least one memory page to be compressed is determined from the memory page list according to a preset algorithm, wherein each memory page to be compressed comprises a first page descriptor, a second page descriptor and at least one third page descriptor; a data management structure is created for the memory page to be compressed based on the first page descriptor, the second page descriptor and the at least one third page descriptor; the first page descriptor and the second page descriptor are copied from the memory page to be compressed and stored in the data management structure, and the first page descriptor, the second page descriptor and the at least one third page descriptor of the memory page to be compressed are deleted. By copying the first and second page descriptors into the data management structure and releasing the page descriptors of the memory page, compression of the memory page's page descriptors is achieved. This not only saves memory space but also saves CPU computing resources and reduces the CPU's computing load, thereby improving data access efficiency when users access data.
Drawings
FIG. 1 is a schematic diagram of a system architecture to which a data processing method according to an embodiment of the present disclosure is applied;
FIG. 2 is a flow chart of a method of data processing provided in one embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a data management structure of a data processing method according to an embodiment of the present disclosure;
FIG. 4 is a flow chart of another data processing method provided by one embodiment of the present disclosure;
FIG. 5 is a process flow diagram of a data processing method according to one embodiment of the present disclosure;
FIG. 6 is a process flow diagram of another data processing method provided by one embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a data processing apparatus according to one embodiment of the present disclosure;
FIG. 8 is a schematic diagram of another data processing apparatus according to one embodiment of the present disclosure;
FIG. 9 is a block diagram of a computing device provided in one embodiment of the present description.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of this specification. However, this specification can be implemented in many other ways than those described herein, and those skilled in the art can make similar generalizations without departing from its spirit; therefore, this specification is not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the specification. As used in this specification, one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of this specification to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of this specification, "first" may also be referred to as "second", and similarly, "second" may also be referred to as "first". Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
First, terms related to one or more embodiments of the present specification will be explained.
Page frame: also called a page, the basic unit of operating-system memory management; a standard page is 4 KB.
Huge page: a memory block managed in units of 2 MB (hugepage), as opposed to the 4 KB standard page; commonly used to improve performance in scenarios such as virtualization and databases. In the embodiments of this specification, a memory page refers to a huge page.
Page descriptor: also called struct page, a structure that manages the state information of a page frame.
Page table: a data structure that translates the virtual addresses visible to a process into physical addresses.
Page table entry (PTE): a 64-bit entry in the page table; bits 0-11 are PTE flags and bits 12 to MAXPHYADDR hold the physical address, where MAXPHYADDR is the maximum physical address width and depends on the system configuration. (A small sketch follows these definitions.)
Page fault handler: the mechanism for allocating memory on demand; the allocation process is triggered when an address is accessed before memory has been allocated for it.
Compression: in this document, compression refers to freeing the remaining page descriptors of a memory page after computing and saving certain information, not to a compression algorithm in the general sense.
Decompression: the reverse of compression; the page descriptors are re-allocated and re-populated, restoring the original huge-page data structure state.
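The PTE layout mentioned in the definitions above can be illustrated with a small sketch; the macro names and helpers here are assumptions for illustration, not the kernel's own definitions:

```c
#include <stdint.h>

/* Sketch of the x86-64 PTE layout described above. Bits 0-11 hold flags,
 * bits 12 up to MAXPHYADDR hold the physical page-frame address. */
#define PTE_FLAG_BITS   12
#define PTE_PRESENT     (1ULL << 0)              /* "present" flag, bit 0   */
#define PTE_FLAGS_MASK  ((1ULL << PTE_FLAG_BITS) - 1)

static inline uint64_t pte_flags(uint64_t pte)
{
    return pte & PTE_FLAGS_MASK;                 /* low 12 flag bits        */
}

static inline uint64_t pte_phys_addr(uint64_t pte, int maxphyaddr)
{
    uint64_t phys_mask = ((1ULL << maxphyaddr) - 1) & ~PTE_FLAGS_MASK;
    return pte & phys_mask;                      /* physical address part   */
}
```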
The Linux operating system manages each 4 KB page frame with a 64 B page descriptor (struct page), a management overhead of 1/64. When server memory is large, this overhead is considerable: for example, about 6 GB of descriptors for 384 GB of memory. Cloud computing servers currently allocate memory to virtual machines mostly in huge pages. A huge page contains 512 standard pages and therefore consumes 512 page descriptors, but describing and tracking the state of the huge page requires only the first page descriptor (called the head page); the remaining page descriptors are either idle or can be derived from the first one. The present application therefore provides a transparent compression method for page descriptors that exploits this characteristic: idle page descriptors are released and memory management overhead is reduced, so the memory of a physical server can be saved effectively by shrinking the operating system's huge-page memory management structures; for a server with 384 GB of memory, roughly 5 GB can be saved.
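The overhead figures quoted above follow from simple arithmetic; the following standalone sketch reproduces them, taking the 64 B descriptor size, 4 KB standard page and 2 MB huge page from the text:

```c
#include <stdio.h>

int main(void)
{
    const unsigned long long page_size = 4096;          /* standard page: 4 KB        */
    const unsigned long long desc_size = 64;            /* sizeof(struct page): 64 B  */
    const unsigned long long huge_size = 2ULL << 20;    /* huge page: 2 MB            */
    const unsigned long long ram_bytes = 384ULL << 30;  /* example server: 384 GB     */

    /* One descriptor per 4 KB frame gives an overhead ratio of 64/4096 = 1/64. */
    unsigned long long descs_per_huge = huge_size / page_size;        /* 512          */
    unsigned long long bytes_per_huge = descs_per_huge * desc_size;   /* 32 KB        */
    unsigned long long total_overhead = ram_bytes / page_size * desc_size;

    printf("descriptors per 2 MB huge page: %llu (%llu KB)\n",
           descs_per_huge, bytes_per_huge >> 10);
    printf("struct page overhead for 384 GB of RAM: %llu GB (1/64 of RAM)\n",
           total_overhead >> 30);
    return 0;
}
```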
Memory compression is currently achieved in two ways. The first is general memory compression: memory data is compressed and swapped out to external storage, freeing up more physical memory; its drawback is that it consumes CPU and that swapping the data back in on a page fault is very slow. The second is huge-page memory compression: cold memory page frames are selected and the data inside them is compressed to gain free memory; its drawback is that it also consumes CPU and increases memory access latency.
It can be seen that the above methods compress the data stored in the page frame, while the present application compresses the page frame descriptor.
This specification provides two data processing methods, and also relates to two data processing apparatuses, a computing device, and a computer-readable storage medium, which are described one by one in the following embodiments.
Fig. 1 is a schematic diagram of a system structure to which a data processing method according to an embodiment of the present disclosure is applied.
It should be noted that the present application exploits the characteristics of the memory page management structure to compress page descriptors. The compression process does not use a CPU-intensive compression algorithm; instead, memory is released directly at compression time, and the information in the page descriptors is restored by copying at decompression time. On a physical server with large memory this saves a considerable amount of memory, for example nearly 5 GB on a server with 384 GB of memory.
In FIG. 1, A is the page descriptor compression controller, which includes a compression unit a1 and a decompression unit a2; B is the memory care process module; and C is the page fault handler module. When the memory care process determines that memory is currently under pressure, it triggers the compression unit. The compression unit starts scanning the memory page list, picks out the memory pages to be compressed, copies the head page descriptor and the first tail page descriptor from each selected page's descriptors, stores them in the data management structure, stores the address of that structure in the page table, and releases the page descriptors, thereby compressing the page descriptors of the memory page.
When the page descriptors of a memory page are compressed, the flag of the page table entry must be cleared, so that when the kernel later needs to access a page descriptor it traps into the page fault handler, which triggers the decompression unit. The decompression unit obtains the first page descriptor and the second page descriptor according to the structure address stored in the page table, determines the third page descriptors from the second page descriptor, allocates them, and restores the state of the memory page's page descriptors before compression.
The data processing method provided in the embodiments of this specification exploits a characteristic of the memory page management structure: all page descriptors other than the head page descriptor are idle and can be reclaimed for other uses. By compressing and decompressing the page descriptors of memory pages, memory management overhead can be reduced.
Referring to fig. 2, fig. 2 shows a flowchart of a data processing method according to an embodiment of the present disclosure, which specifically includes the following steps:
step 202: and receiving a data processing request aiming at the memory page, and acquiring and scanning a memory page list based on the data processing request.
Wherein a data processing request may be understood as a data compression request for a page descriptor by a user; a memory page list may be understood as a list of memory pages storing memory data.
Specifically, the compression unit of the page descriptor compression controller receives a data processing request for a memory page, and obtains and scans a memory page list based on the data processing request.
In practical application, before receiving the data processing request for the memory page, the method further includes:
acquiring a target processing page number aiming at a current memory page under the condition that the memory monitoring system determines that the memory value of the current memory page meets a preset memory condition;
and determining the current processing page number of the current memory page, and receiving a data processing request aiming at the current memory page under the condition that the current processing page number is smaller than the target processing page number.
Specifically, when the memory monitoring system determines that the memory value of the current memory pages meets the preset memory condition, the compression unit of the page descriptor compression controller obtains the target processing page number for the current memory pages and determines the current processing page number, and receives a data processing request for the current memory pages when the current processing page number is smaller than the target processing page number.
In practical applications, the kernel of the memory monitoring system periodically wakes up the memory care process to check the system memory state and can monitor the memory value of the system memory pages in real time. When the available memory falls below the watermark, that is, when the memory value meets the preset memory condition by exceeding the preset memory threshold, the system is under memory pressure and the compression mechanism is triggered: the system attempts to release more available memory through measures such as reclaiming caches, and the compression unit starts the compression process. The target processing page number can be entered in advance by a system administrator, and different target processing page numbers can be set for different system memory sizes; this specification does not restrict this. Further, after obtaining the target processing page number for the current memory pages, the compression unit can determine the current processing page number and, when it is smaller than the target processing page number, continue compressing the page descriptors of memory pages until the current processing page number reaches the target. It should be noted that compressing the page descriptors of memory pages is a dynamically adjusted process that adapts to different system memory sizes; the embodiments of this specification are merely illustrative.
For example, the memory monitoring system determines that the current system memory is 384 GB. When it determines that the remaining free memory of 4 GB is below the system memory watermark, the compression unit is triggered. If the target processing page number is 5 and the current processing page number is 0, the compression unit starts the compression process and compresses the page descriptors of memory pages until the number of memory pages whose page descriptors have been compressed reaches 5, at which point it stops.
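Conceptually, the compression unit's behaviour in this example can be sketched as a simple loop; the helper names below are hypothetical stand-ins for the scanning and compression steps described in the following sections:

```c
/*
 * Sketch of the compression unit's outer loop: keep compressing cold huge
 * pages until the administrator-supplied target count is reached.
 * ptzip_pick_cold_page() and ptzip_compress() are hypothetical helpers.
 */
struct page;                                    /* kernel page descriptor        */
struct page *ptzip_pick_cold_page(void);        /* scan the free/active lists    */
void ptzip_compress(struct page *hpage);        /* save 2 descriptors, free 512  */

static void ptzip_compress_until_target(unsigned long target_pages)
{
    unsigned long done = 0;                     /* current processing page number */

    while (done < target_pages) {
        struct page *hpage = ptzip_pick_cold_page();
        if (!hpage)
            break;                              /* no cold huge pages left        */
        ptzip_compress(hpage);
        done++;
    }
}
```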
Further, the obtaining the target processing page number for the current memory page when the memory monitoring system determines that the memory value of the current memory page meets the preset memory condition includes:
opening a use interface under the condition that the memory monitoring system determines that the memory value of the current memory page meets the preset memory condition;
acquiring a target processing page number aiming at the current memory page based on the using interface;
correspondingly, receiving a data processing request for the current memory page under the condition that the current processing page number is smaller than the target processing page number, including:
and receiving a data processing request aiming at the current memory page based on a using interface under the condition that the current processing page number is smaller than the target processing page number.
Specifically, after the compression unit is triggered to start the compression process, the memory monitoring system opens the usage interface, obtains the target processing page number for the current memory pages through that interface, and receives the data processing request for the current memory pages through the interface when the current processing page number is smaller than the target processing page number. In practical applications, the feature can be enabled by adding a kernel command-line parameter at boot, or the enable switch of the compression controller can be turned on by writing 1 to the corresponding control interface and turned off by writing 0; closing the enable switch triggers the decompression process.
By opening the enable switch, the embodiments of this specification trigger page descriptor compression through the usage interface and thereby obtain the required data.
Step 204: and determining at least one memory page to be compressed from the memory page list according to a preset algorithm, wherein each memory page to be compressed comprises a first page descriptor, a second page descriptor and at least one third page descriptor.
The preset algorithm may be understood as an algorithm for screening out memory pages to be compressed from the memory page list, such as a clock-like algorithm.
Specifically, the compression unit starts scanning the memory page list, scanning the free page list first and the active list second; each memory page to be compressed includes a first page descriptor, a second page descriptor and at least one third page descriptor.
In practical application, the memory page list comprises a first type memory page list and a second type memory page list;
correspondingly, the determining at least one memory page to be compressed from the memory page list according to the preset algorithm includes:
acquiring at least one memory page to be compressed under the condition that the first type memory page list comprises the at least one memory page to be compressed according to a preset algorithm; or alternatively
Acquiring at least one memory page to be compressed under the condition that the second type memory page list comprises the at least one memory page to be compressed according to a preset algorithm; or alternatively
And under the condition that the first type memory page list comprises at least one memory page to be compressed according to a preset algorithm and the second type memory page list comprises at least one memory page to be compressed, acquiring the at least one memory page to be compressed from the first type memory page list and acquiring the at least one memory page to be compressed from the second type memory page list.
The first type memory page list is a free list, which can be understood as a list which does not store actual memory data, or a free list formed by memory pages which do not use memory data temporarily; the second type of memory page list is an active list, which may be understood as a list in which actual memory data has been stored, or an active list formed by memory pages in which memory data has been recently used.
In practical applications, at least one memory page to be compressed can be determined from the free list according to a clock-like algorithm. PTE bit 6 is defined as PTZIP_PTE_YONG and PTE bit 9 (a reserved bit) as PTZIP_PTE_MIDDLEAGE; the YONG flag marks a hot page that has just been accessed, while the MIDDLEAGE flag marks a colder page. When a page has the PTZIP_PTE_YONG flag set, the next scan changes it to the PTZIP_PTE_MIDDLEAGE flag, but the page is not added to the compression list; when a page has the PTZIP_PTE_MIDDLEAGE flag set, the next scan clears the flag, but the page is still not added to the compression list. Only when a memory page has neither flag, indicating that it has not been used for some time, is it judged to be a cold page and added to the compression list.
It should be noted that the memory pages to be compressed may be selected from the first type memory page list, from the second type memory page list, or from both; this specification does not restrict this.
According to the data processing method provided by the embodiment of the specification, the memory pages to be compressed are screened out from the memory page list, so that the page descriptors of the memory pages can be compressed conveniently, the compression of memory data is realized, and the memory storage space is saved.
Further, the determining that the first type memory page list includes at least one memory page to be compressed according to a preset algorithm includes:
acquiring a page identifier of each memory page in the first memory page list, and determining the memory page as a memory page to be compressed under the condition that the page identifier of the memory page is determined to be empty;
correspondingly, the determining that the second type memory page list includes at least one memory page to be compressed according to the preset algorithm includes:
and acquiring a page identifier of each memory page in the second memory page list, and determining the memory page as a memory page to be compressed under the condition that the page identifier of the memory page is determined to be empty.
Specifically, the compression unit can obtain the page identifier of each memory page in the first type memory page list and, when the page identifier of a memory page is determined to be empty, determine that the memory page is a memory page to be compressed; similarly, it can obtain the page identifier of each memory page in the second type memory page list and, when the identifier is empty, determine that the page is a memory page to be compressed. In practical applications, cold pages are screened out by the clock-like algorithm; a cold page indicates that the memory page has not been used for some time, so it is added to the list of memory pages to be compressed, making it convenient to compress the page descriptors of the memory pages in that list.
By determining the page identifiers of the memory pages in the memory page list and screening out those whose identifiers are empty as memory pages to be compressed, the data processing method provided in the embodiments of this specification facilitates the subsequent compression of the page descriptors of the memory pages to be compressed, releases the idle page descriptors, and reduces memory management overhead.
Further, memory pages whose page identifiers are empty are screened out; the page descriptors of such memory pages are in an idle state, and compressing these idle page descriptors reduces the system's memory usage. Specifically, determining that a memory page is a memory page to be compressed when its page identifier is empty includes:
Scanning the page identifier of the memory page to obtain an initial page identifier of the memory page;
scanning the page identification of the memory page again based on a preset time interval to acquire a target page identification of the memory page;
and under the condition that the initial page identification and the target page identification are both empty, determining the memory page as the memory page to be compressed.
Specifically, the compression unit scans the page identifiers in the memory page list to obtain the initial page identifier of each memory page, and scans the page identifiers again after a preset time interval to obtain the target page identifier. It should be noted that the initial and target page identifiers are relative: the identifier from the first scan serves as the initial page identifier and the identifier from the second scan serves as the target page identifier. When both the initial and target page identifiers of a memory page are empty, the memory page can be determined to be a memory page to be compressed. In practical applications, the memory pages screened out for compression are cold pages whose page descriptors are all in an idle state.
The data processing method provided in the embodiments of this specification reduces memory usage by compressing the screened-out idle page descriptors.
Step 206: a data management structure is created for the memory page to be compressed based on the first page descriptor, the second page descriptor, and at least one third page descriptor.
Here, the first page descriptor is the head page descriptor, the second page descriptor is the first tail page descriptor, and each third page descriptor is a tail page descriptor identical to the first tail page descriptor.
The data management structure may be understood as a memory space storing compressed information, such as a memory space storing a first page descriptor and a second page descriptor.
Specifically, a data management structure is created for each screened-out memory page to be compressed based on the first page descriptor, the second page descriptor and the at least one third page descriptor. It should be noted that one memory page to be compressed is managed by 512 page descriptors: the first is the head page descriptor and the remaining 511 are tail page descriptors. When the system looks up and manages a memory page, only the head page descriptor is needed, and the tail page descriptors are generally unused. Further, the content of the head page descriptor differs from that of the tail page descriptors, while the contents of the remaining tail page descriptors are all the same.
In practical applications, during compression of the page descriptors of a memory page to be compressed, the first page descriptor and the second page descriptor must be saved so that the page descriptors can be refilled when they are later decompressed, restoring the page descriptors of the memory page; a data management structure therefore needs to be created to store and manage the first and second page descriptors.
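A minimal sketch of such a data management structure is shown below, assuming the two descriptor copies are stored by value; the type name, field names and 64-byte size are illustrative assumptions, not definitions taken from the patent text:

```c
/*
 * Sketch of the per-huge-page compression record (the "ptzip_hpage" of
 * FIG. 5). Only the information that cannot be re-derived is kept: a copy
 * of the head page descriptor and a copy of the first tail page descriptor.
 */
#define PAGE_DESC_SIZE 64                        /* sizeof(struct page) per the text */

struct ptzip_hpage {
    unsigned char head_copy[PAGE_DESC_SIZE];        /* first page descriptor  */
    unsigned char first_tail_copy[PAGE_DESC_SIZE];  /* second page descriptor */
};
```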
By creating a data management structure for the memory page to be compressed, the data processing method provided in the embodiments of this specification stores the first and second page descriptors of the page so that, after the system later traps into the page fault handler, the decompression process can be completed from the first and second page descriptors.
Step 208: copying the first page descriptor and the second page descriptor from the memory page to be compressed, storing the first page descriptor and the second page descriptor in the data management structure, and deleting the first page descriptor, the second page descriptor and at least one third page descriptor of the memory page to be compressed.
Specifically, the compression unit copies the first page descriptor and the second page descriptor from the memory page to be compressed, stores the copies in the created data management structure, and deletes the page descriptors of the memory page to be compressed, including the first page descriptor, the second page descriptor and the at least one third page descriptor.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating a data management structure of a data processing method according to an embodiment of the present disclosure.
In FIG. 3, the page table on the left stores page table entry data; the first page descriptor and the second page descriptor are copied from the 512 page descriptors of the memory page and stored in the data management structure.
After the first and second page descriptors are stored in the data management structure, the address of the data management structure is stored in the page table corresponding to the memory page, so that the address can later be looked up in the page table. Specifically, after copying the first page descriptor and the second page descriptor from the memory page to be compressed and storing them in the data management structure, the method further includes:
And acquiring a structure address of the data management structure, storing the structure address into a page table corresponding to the memory page to be compressed, and establishing a mapping relation between the structure address and the page table corresponding to the memory page to be compressed.
The page table is also called a page table, and is a data structure for translating virtual addresses visible to a process into physical addresses.
Specifically, the compression unit needs to acquire a structure address of the data management structure, store the structure address to a page table corresponding to a memory page to be compressed, and establish a mapping relationship between the memory page and the data management structure.
According to the data processing method provided by the embodiment of the specification, the address of the data management structure is determined and stored in the page table corresponding to the memory page, so that the page descriptor of the corresponding memory page can be quickly determined through the page table item.
Further, the obtaining the structure address of the data management structure and storing the structure address to a page table corresponding to the memory page to be compressed includes:
Determining a first structure address of the data management structure, and storing the first structure address into a first page table entry of a page table corresponding to the memory page to be compressed;
and determining a second structure address of the data management structure, and storing the second structure address to a second page table item of a page table corresponding to the memory page to be compressed, wherein the structure address comprises the first structure address and the second structure address.
Wherein the first structure address may be understood as a first part of the data management structure address; the second structure address is understood to be a second part of the data management structure address, and it should be noted that the first part and the second part together form the address of the data management structure.
Specifically, the compression unit determines a first structure address of the data management structure, stores the first structure address into a first page table entry of a page table corresponding to a memory page to be compressed, determines a second structure address of the data management structure, and stores the second structure address into a second page table entry of the page table corresponding to the memory page to be compressed.
In practical applications, the data management structure address is a 64-bit address stored across two page table entries: the first structure address is stored in the first page table entry and the second structure address in the second page table entry, for example bits 0-31 in the first entry and bits 32-63 in the second. It should be noted that although one page table entry could hold 64 bits of address data, splitting the 64-bit data management structure address across two page table entries means no separate page table entry is needed to store the data management structure; a page table entry that already holds part of the address can continue to be used, maximizing the use of the memory space.
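The split described above can be sketched as follows, assuming each half of the 64-bit structure address is placed in the upper 32 bits of one of the first two PTEs (as in the processing flow of FIG. 5); this is an illustration, not the exact kernel encoding:

```c
#include <stdint.h>

/*
 * Sketch: store a 64-bit data management structure address across the upper
 * halves of the huge page's first two PTEs, and recover it later.
 */
static inline void ptzip_store_addr(uint64_t *pte0, uint64_t *pte1, uint64_t addr)
{
    *pte0 = (*pte0 & 0xFFFFFFFFULL) | ((addr & 0xFFFFFFFFULL) << 32); /* low 32 bits  */
    *pte1 = (*pte1 & 0xFFFFFFFFULL) | ((addr >> 32) << 32);           /* high 32 bits */
}

static inline uint64_t ptzip_load_addr(uint64_t pte0, uint64_t pte1)
{
    return (pte0 >> 32) | ((pte1 >> 32) << 32);   /* reassemble the 64-bit address */
}
```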
By storing the address of the data management structure across two page table entries, the data processing method provided in the embodiments of this specification stores the structure address while maximizing the use of the memory space.
After deleting the first page descriptor, the second page descriptor and the at least one third page descriptor of the memory page to be compressed, the page table entry identifier of the memory page is cleared, so that when the system kernel later needs to access a page descriptor it traps into the page fault handler and triggers the decompression process. Specifically, after deleting the first page descriptor, the second page descriptor and the at least one third page descriptor of the memory page to be compressed, the method further includes:
and acquiring a page table item identifier of the memory page to be compressed, and setting the page table item identifier to be in a first state.
Specifically, after compressing the page descriptors of the memory page to be compressed, the compression unit sets the page table entry identifier of the memory page to the first state. In practical applications, the page table entry identifier of the memory page to be compressed can be cleared, i.e. set to 0; then, when the system kernel needs to access a page descriptor, it traps into the page fault handler and triggers the decompression process, achieving the effect of transparent compression.
After the page table entry identifier of the memory page to be compressed is cleared, when a user accesses the memory page the page descriptor cannot be found through the page table; the system's own mechanism traps the process into the page fault handler, which automatically triggers the decompression process, restores the page descriptors of the memory page, and then finds the corresponding data in the memory page.
According to the data processing method provided by the embodiment of the specification, the page table item identification of the memory page to be compressed is cleared, so that the decompression processing program is triggered when the page descriptor is accessed later, and the compressed memory page is accessed.
In summary, the data processing method provided in the embodiments of the present disclosure is a process of compressing page descriptors of memory pages to be compressed, and reduces management overhead by compressing idle page descriptors in memory pages, so that memory of a physical server can be effectively saved.
Referring to fig. 4, fig. 4 shows a flowchart of another data processing method according to an embodiment of the present disclosure, which specifically includes the following steps:
step 402: receiving a data processing request for a memory page, and distributing a data storage structure for the memory page based on the data processing request.
The data storage structure may be understood as 8 standard memory pages; it should be noted that the memory space occupied by the 512 restored page descriptors is the same as the space occupied by 8 standard memory pages (512 x 64 B = 32 KB = 8 x 4 KB).
Specifically, after the decompression unit of the compression controller receives a data processing request for a memory page, it allocates 8 standard memory pages as the data storage structure for the memory page, in which the 512 page descriptors of the compressed memory page will be stored.
Before the decompression unit receives a data processing request for a memory page, it also needs to receive a data query request for the data in the memory page. Specifically, before receiving the data processing request for the memory page, the method further includes:
receiving a data inquiry request, and determining a page table corresponding to a memory page according to a memory page identifier of stored data carried in the data inquiry request;
and under the condition that the page table item identification of the memory page is in the first state, judging whether a page table of the memory page has a corresponding structure address, and if so, receiving a data processing request aiming at the memory page.
Specifically, the decompression unit receives a data query request, determines a page table corresponding to a memory page according to a memory page identifier of storage data carried in the data query request, and determines whether a page table of the memory page has a corresponding structure address or not under the condition that a page table item identifier of the memory page is determined to be in a first state, if so, receives a data processing request for the memory page.
In practical applications, after a memory page to be compressed has been compressed, its page table entry identifier is cleared, i.e. set to the first state. After a data query request is received, the memory page storing the data to be queried is determined from the memory page identifier carried in the request, and the page table corresponding to that memory page is then determined. When the page table entry identifier of the memory page in the page table is determined to be in the first state, it is judged whether the page table of the memory page has a corresponding structure address; if so, a data processing request for the memory page is received.
For a user's data query, the data processing method provided in the embodiments of this specification determines the page table corresponding to the memory page storing the data; if the page table entry identifier of the memory page in that page table is in the first state, i.e. the identifier is 0, the structure address corresponding to the memory page is determined from the page table, making it convenient to subsequently decompress the page descriptors of the memory page.
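The check described here can be sketched roughly as follows; PTE_PRESENT stands for the page table entry identifier, ptzip_load_addr() is the reassembly helper sketched earlier, and ptzip_decompress() is a hypothetical stand-in for the restore step:

```c
#include <stdint.h>
#include <stdbool.h>

#define PTE_PRESENT (1ULL << 0)            /* "first state" means this bit is 0 */

uint64_t ptzip_load_addr(uint64_t pte0, uint64_t pte1);   /* see earlier sketch */
void ptzip_decompress(uint64_t hpage_addr);               /* hypothetical       */

/* Sketch of the decompression trigger inside the page fault path. */
static bool ptzip_maybe_decompress(uint64_t pte0, uint64_t pte1)
{
    if (pte0 & PTE_PRESENT)
        return false;                      /* descriptors are not compressed    */

    uint64_t addr = ptzip_load_addr(pte0, pte1);
    if (!addr)
        return false;                      /* no structure address recorded     */

    ptzip_decompress(addr);                /* refill the 512 page descriptors   */
    return true;
}
```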
Step 404: and acquiring a page table corresponding to the memory page, and determining a structure address of a data management structure corresponding to the memory page based on the page table.
Specifically, the decompression unit obtains the page table corresponding to the memory page and determines, in the page table, the structure address of the data management structure corresponding to the memory page. As in the embodiments above, the data management structure is the storage structure holding the first and second page descriptors of the memory page, which is not described again here.
Further, the obtaining the page table corresponding to the memory page, determining the structure address of the data management structure corresponding to the memory page based on the page table, includes:
and acquiring a first page table entry and a second page table entry in a page table corresponding to the memory page, and determining a structure address of a data management structure corresponding to the memory page based on the first page table entry and the second page table entry.
Specifically, a first page table entry in a page table corresponding to the memory page stores a part of addresses of the data management structure, and a second page table entry stores another part of addresses of the data management structure, wherein the addresses stored in the first page table entry and the second page table entry are combined, and the combined addresses are the structure addresses of the data management structure.
In practice, which entries serve as the first and second page table entries is relative. As explained in the embodiments above, the 64-bit address of the data management structure is stored across different page table entries, so during data lookup the address of the data management structure must correspondingly be assembled from those entries.
According to the data processing method provided by the embodiment of the specification, the structure address of the data management structure is determined through different page table entries in the page table, so that the data management structure is determined from the structure address for storing the data management structure, and further the page descriptor corresponding to the memory page is obtained.
Step 406: and acquiring a first page descriptor and a second page descriptor of the memory page from the data management structure based on the structure address, and determining at least one third page descriptor based on the second page descriptor.
Specifically, the decompression unit determines the data management structure corresponding to the memory page from the structure address and obtains the first page descriptor and the second page descriptor of the memory page from it. As in the embodiments above, the first page descriptor is the head page descriptor and the second page descriptor is the first tail page descriptor; at least one third page descriptor is then determined from the second page descriptor. In practical applications, of the 512 page descriptors, the remaining 510 third page descriptors are determined in this way, since all 511 tail page descriptors other than the first page descriptor are identical.
Step 408: storing the first page descriptor, the second page descriptor, and at least one third page descriptor to the data storage structure.
Specifically, after the third page descriptors of the memory page are determined, the first page descriptor, the second page descriptor and the at least one third page descriptor are stored in the data storage structure. In practical applications, the restored 512 page descriptors are stored in the 8 standard memory pages of the data storage structure, decompressing the compressed page descriptors and restoring the memory page to its state before compression.
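A minimal sketch of this restore step is shown below, replaying the two saved copies into the newly allocated descriptor area; the sizes follow the text and the names are illustrative:

```c
#include <string.h>

/*
 * Sketch of decompression: refill all 512 descriptors of a 2 MB huge page
 * from the two copies saved in the ptzip_hpage sketched earlier. `dst`
 * points at the data storage structure (8 standard pages, 512 x 64 B);
 * per the description above, all tail descriptors are identical to the
 * first tail descriptor.
 */
#define PAGE_DESC_SIZE  64
#define DESCS_PER_HUGE  512

struct ptzip_hpage {
    unsigned char head_copy[PAGE_DESC_SIZE];
    unsigned char first_tail_copy[PAGE_DESC_SIZE];
};

static void ptzip_fill_descriptors(unsigned char *dst, const struct ptzip_hpage *saved)
{
    memcpy(dst, saved->head_copy, PAGE_DESC_SIZE);          /* descriptor 0: head  */
    for (int i = 1; i < DESCS_PER_HUGE; i++)                /* descriptors 1..511  */
        memcpy(dst + (size_t)i * PAGE_DESC_SIZE,
               saved->first_tail_copy, PAGE_DESC_SIZE);     /* all tails identical */
}
```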
After the compressed memory page is decompressed, the page table entry identifier in the page table entry is set to 1, restoring the state before compression. Specifically, after the first page descriptor, the second page descriptor and the at least one third page descriptor are stored in the data storage structure, the method further includes:
and setting a page table item identifier corresponding to the memory page to be in a second state based on the decompressed first page descriptor, second page descriptor and at least one third page descriptor.
Specifically, for the decompressed memory page, the page table entry identifier corresponding to the memory page is set to be in the second state, and in practical application, the page table entry identifier is set to be 1, so as to restore the state before compression.
By handling the page table entry identifier of the memory page in this way, the data processing method provided in the embodiments of this specification makes it easy to judge from the identifier's state whether the page descriptors of the memory page are compressed, realizing a transparent method for compressing and decompressing page descriptors that is economical and efficient to operate and use.
In summary, the data processing method provided in the embodiments of this specification exploits the characteristics of the memory page management structure to compress page descriptors, each 64 B in size. The compression process does not use a CPU-intensive compression algorithm; it releases the memory directly at compression time and restores the information in the page descriptors by copying at decompression time.
Referring to fig. 5, fig. 5 shows a flowchart of a processing procedure of a data processing method according to an embodiment of the present disclosure, which specifically includes the following steps.
Step 502: the user inputs a preset number of compressed memory pages.
Specifically, the preset number of compressed memory pages is input by a user, and in practical application, the system administrator may input the number of memory pages expected to be compressed, that is, the preset number of compressed memory pages, according to a preset requirement.
Step 504: when the memory nursing process determines that the system memory is tense, the decompression unit is triggered, and the compression logic is adjusted.
Specifically, the operating system kernel wakes up periodically through the memory care process (kswapd) to check the system memory state; when available memory falls below the watermark, the shrinker mechanism is triggered to try to release more available memory through measures such as reclaiming caches. ptzip (the compression controller) registers its transparent compression handler with a shrinker by means of the shrinker mechanism, so that when system memory is under pressure, kswapd triggers the compression unit.
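On the kernel side, hooking into the shrinker mechanism looks roughly like the following sketch; the callback bodies are placeholders, the ptzip_* helpers are hypothetical, and register_shrinker()'s exact signature differs between kernel versions:

```c
#include <linux/shrinker.h>

unsigned long ptzip_compressible_pages(void);            /* hypothetical */
unsigned long ptzip_compress_some(unsigned long nr);     /* hypothetical */

static unsigned long ptzip_count_objects(struct shrinker *s,
                                         struct shrink_control *sc)
{
    return ptzip_compressible_pages();        /* how many cold huge pages remain */
}

static unsigned long ptzip_scan_objects(struct shrinker *s,
                                        struct shrink_control *sc)
{
    return ptzip_compress_some(sc->nr_to_scan);  /* compress up to nr_to_scan pages */
}

static struct shrinker ptzip_shrinker = {
    .count_objects = ptzip_count_objects,
    .scan_objects  = ptzip_scan_objects,
    .seeks         = DEFAULT_SEEKS,
};

/* In the init path (older kernels): register_shrinker(&ptzip_shrinker); */
```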
Step 506: the compression unit determines whether the currently compressed memory pages reach the preset number of compressed memory pages, if not, step 508 is executed, and if yes, the process is ended.
Step 508: the compression unit determines a memory page to be compressed.
Specifically, the compression unit starts to scan the memory page lists, scanning the idle list first and the active list second, because an active large page is more likely to be used again soon and a large page that has just been compressed would immediately trigger the decompression process. Cold pages are then selected by a clock-like algorithm and added to the list to be compressed. The clock-like algorithm is implemented as follows:
a. PTE bit 6, i.e. the DIRTY flag, is defined as PTZIP_PTE_YOUNG, and PTE bit 9 (a reserved bit) is defined as PTZIP_PTE_MIDDLEAGE; the YOUNG flag marks a hot page that has just been accessed, and the MIDDLEAGE flag marks a colder page;
b. when a page has the PTZIP_PTE_YOUNG flag set, the flag is changed to PTZIP_PTE_MIDDLEAGE at the next scan, but the page is not added to the compression list;
c. when a page has the PTZIP_PTE_MIDDLEAGE flag set, the flag is cleared at the next scan, but the page is not added to the compression list;
d. only when a page carries neither flag, which indicates that it has not been used for some time, is it judged to be a cold page, added to the compression list, and thereby determined to be a memory page to be compressed.
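A standalone sketch of this ageing state machine (the bit positions and flag names follow the description above; the function name and the plain-integer PTE representation are illustrative assumptions, and a real implementation would manipulate the PTE through the kernel's pte helpers):

```c
#include <stdbool.h>
#include <stdint.h>

/* Bit 6 (the hardware dirty bit, reused here) marks a recently touched page;
 * bit 9 (an ignored/reserved bit) marks a "middle-aged" page. */
#define PTZIP_PTE_YOUNG     (UINT64_C(1) << 6)
#define PTZIP_PTE_MIDDLEAGE (UINT64_C(1) << 9)

/* One pass of the clock-like ageing scan over a single PTE value.
 * Returns true when the page is judged cold and should be queued for
 * descriptor compression. */
bool ptzip_age_pte(uint64_t *pteval)
{
    if (*pteval & PTZIP_PTE_YOUNG) {
        /* Recently accessed: demote to middle-aged, do not compress yet. */
        *pteval &= ~PTZIP_PTE_YOUNG;
        *pteval |= PTZIP_PTE_MIDDLEAGE;
        return false;
    }
    if (*pteval & PTZIP_PTE_MIDDLEAGE) {
        /* Untouched since the last scan: clear the mark and give the page
         * one more round before declaring it cold. */
        *pteval &= ~PTZIP_PTE_MIDDLEAGE;
        return false;
    }
    /* Neither mark set: idle for at least two scan periods, treat as cold. */
    return true;
}
```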
Step 510: the compression unit copies the first page descriptor and the second page descriptor of the memory page to be compressed and places them in the data management structure.
Specifically, a ptzip_hpage (data structure) is allocated to store the compressed information, which comprises the descriptor of the head page and the descriptor of the first tail page. The 512 page descriptors corresponding to the large page, which occupy 8 pages of 4 KB, are then released, and the 64-bit address of the ptzip_hpage is stored in the first two PTEs of the memory page: the low 32 bits of the address are stored in the high 32 bits of the first PTE and the high 32 bits of the address in the high 32 bits of the second PTE, directly establishing the mapping relationship between the compressed information of the memory page and the data structure.
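A minimal sketch of that address split and its inverse (ptzip_hpage is the structure named in the text; the helper names, the two-element PTE array and the assumption that the low 32 bits of each PTE are left untouched are illustrative):

```c
#include <stdint.h>

struct ptzip_hpage;  /* compressed information: head-page and first-tail-page descriptors */

/* Record a 64-bit ptzip_hpage pointer across two PTEs: the low half of the
 * address goes into the upper 32 bits of the first PTE, the high half into
 * the upper 32 bits of the second PTE. */
void ptzip_store_hpage_addr(uint64_t pte[2], const struct ptzip_hpage *hp)
{
    uint64_t addr = (uint64_t)(uintptr_t)hp;

    pte[0] = (pte[0] & UINT64_C(0xffffffff)) | ((addr & UINT64_C(0xffffffff)) << 32);
    pte[1] = (pte[1] & UINT64_C(0xffffffff)) | ((addr >> 32) << 32);
}

/* Reassemble the pointer during decompression (the inverse operation). */
struct ptzip_hpage *ptzip_load_hpage_addr(const uint64_t pte[2])
{
    uint64_t lo = pte[0] >> 32;
    uint64_t hi = pte[1] >> 32;

    return (struct ptzip_hpage *)(uintptr_t)((hi << 32) | lo);
}
```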
Step 512: the compression unit releases the page descriptor of the memory page to be compressed.
In practical application, when a page descriptor is compressed, the present flag (the page table entry identifier) of the page table entry covering that descriptor must be cleared, so that when the kernel later needs to access the page descriptor it falls into the page fault handler, which triggers the decompression process, thereby achieving the effect of transparent compression.
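Continuing the illustrative sketch above, clearing the present bit on the eight PTEs that cover the 32 KiB of freed descriptors is what makes any later descriptor access fault (bit 0 is the x86 present bit; the helper name is an assumption):

```c
#include <stdint.h>

#define PTZIP_PTE_PRESENT (UINT64_C(1) << 0)  /* x86 present bit */

/* Mark the PTEs covering the freed descriptors as not present, so that a
 * later access to any of the 512 descriptors traps into the page fault
 * handler and triggers transparent decompression. */
void ptzip_clear_present(uint64_t *pte, int nr_ptes)
{
    for (int i = 0; i < nr_ptes; i++)
        pte[i] &= ~PTZIP_PTE_PRESENT;
}
```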
According to the data processing method provided by the embodiments of the present specification, the idle page descriptors of the memory pages to be compressed are compressed, which reduces management overhead and effectively saves memory of the physical server.
Referring to fig. 6, fig. 6 shows a flowchart of a processing procedure of another data processing method according to an embodiment of the present disclosure, which specifically includes the following steps.
Step 602: a data query request from the user accesses the page descriptor of the corresponding memory page.
Step 604: the kernel access triggers the page fault handler.
Specifically, when any other kernel activity accesses a page descriptor of the compressed memory page, a page fault exception is raised and the decompression process is triggered. The transparent compression effect of this patent means that only the large page subsystem perceives the compression and decompression process, which remains transparent to other kernel code that uses large pages. For example, if the file system uses a large page as a metadata cache, it needs to access the page descriptor, for instance to set the page_private flag, and that access triggers the decompression unit to operate.
Step 606: the decompression unit determines whether the address that caused the page fault corresponds to a data management structure in which compressed page descriptors are stored; if so, step 608 is executed, and if not, the process ends.
Step 608: the decompression unit allocates the data storage structure to recover the compressed page descriptors.
Specifically, in the page fault exception handler, 8 memory pages are allocated first, the address of the PTE covering the faulting memory page is obtained from the page fault address, and the address of the ptzip_hpage (the data management structure) is then obtained from the PTEs.
Step 610: the decompression unit decompresses the page descriptor.
Specifically, the head page and first tail page descriptor information stored in the ptzip_hpage is used to fill in the memory page descriptors, the present flag (page table entry identifier) is set in the page table entries corresponding to the decompressed page descriptors, and the state before compression is restored.
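A minimal sketch of the copy-based restore (the descriptor size and count come from the text; the structure layout, field names and the simplification that every remaining tail descriptor is rebuilt from the first tail descriptor are assumptions of this example):

```c
#include <string.h>

#define PTZIP_DESC_SIZE 64   /* size of one page descriptor, per the text */
#define PTZIP_DESCS     512  /* descriptors per large page */

/* Illustrative stand-in for a page descriptor; only its size matters here. */
struct pg_desc { unsigned char bytes[PTZIP_DESC_SIZE]; };

/* Compressed information kept per large page. */
struct ptzip_hpage {
    struct pg_desc head;        /* first page descriptor (head page) */
    struct pg_desc first_tail;  /* second page descriptor (first tail page) */
};

/* Rebuild the 512 descriptors in the freshly allocated 8-page storage area:
 * slot 0 receives the saved head descriptor, slot 1 the saved first tail
 * descriptor, and the remaining tails are reconstructed from the first tail.
 * Setting the present bit again on the covering PTEs is done afterwards. */
void ptzip_fill_descriptors(struct pg_desc dst[PTZIP_DESCS],
                            const struct ptzip_hpage *hp)
{
    memcpy(&dst[0], &hp->head, sizeof(dst[0]));
    memcpy(&dst[1], &hp->first_tail, sizeof(dst[1]));
    for (int i = 2; i < PTZIP_DESCS; i++)
        memcpy(&dst[i], &hp->first_tail, sizeof(dst[i]));
}
```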
According to the data processing method provided by the embodiments of the present specification, the page descriptors are compressed by exploiting the characteristics of the memory page management structure; each page descriptor is 64 B in size, the compression process is not a CPU-intensive compression algorithm but directly releases the memory during compression, and the information in the page descriptors is restored by copying during decompression.
Corresponding to the above method embodiments, the present disclosure further provides an embodiment of a data processing apparatus, and fig. 7 shows a schematic structural diagram of a data processing apparatus according to one embodiment of the present disclosure. As shown in fig. 7, the apparatus includes:
a first receiving module 702 configured to receive a data processing request for a memory page, and acquire and scan a memory page list based on the data processing request;
a first determining module 704 configured to determine at least one memory page to be compressed from the memory page list according to a preset algorithm, where each memory page to be compressed includes a first page descriptor, a second page descriptor, and at least one third page descriptor;
A creation module 706 configured to create a data management structure for the memory page to be compressed based on the first page descriptor, the second page descriptor, and at least one third page descriptor;
a first storage module 708 configured to copy the first page descriptor and the second page descriptor from the memory page to be compressed, store them in the data management structure, and delete the first page descriptor, the second page descriptor and the at least one third page descriptor of the memory page to be compressed.
Optionally, the apparatus is further configured for:
acquiring a target processing page number aiming at a current memory page under the condition that the memory monitoring system determines that the memory value of the current memory page meets a preset memory condition;
and determining the current processing page number of the current memory page, and receiving a data processing request aiming at the current memory page under the condition that the current processing page number is smaller than the target processing page number.
Optionally, the apparatus is further configured for:
opening a use interface under the condition that the memory monitoring system determines that the memory value of the current memory page meets the preset memory condition;
acquiring a target processing page number aiming at the current memory page based on the using interface;
Correspondingly, receiving a data processing request for the current memory page under the condition that the current processing page number is smaller than the target processing page number, including:
and receiving a data processing request aiming at the current memory page based on a using interface under the condition that the current processing page number is smaller than the target processing page number.
Optionally, the first determining module 704 is further configured to:
acquiring at least one memory page to be compressed under the condition that the first type memory page list comprises the at least one memory page to be compressed according to a preset algorithm; or alternatively
Acquiring at least one memory page to be compressed under the condition that the second type memory page list comprises the at least one memory page to be compressed according to a preset algorithm; or alternatively
And under the condition that the first type memory page list comprises at least one memory page to be compressed according to a preset algorithm and the second type memory page list comprises at least one memory page to be compressed, acquiring the at least one memory page to be compressed from the first type memory page list and acquiring the at least one memory page to be compressed from the second type memory page list.
Optionally, the first determining module 704 is further configured to:
acquiring a page identifier of each memory page in the first type memory page list, and determining the memory page as a memory page to be compressed under the condition that the page identifier of the memory page is determined to be empty;
correspondingly, the determining that the second type memory page list includes at least one memory page to be compressed according to the preset algorithm includes:
and acquiring a page identifier of each memory page in the second type memory page list, and determining the memory page as a memory page to be compressed under the condition that the page identifier of the memory page is determined to be empty.
Optionally, the first determining module 704 is further configured to:
scanning the page identifier of the memory page to obtain an initial page identifier of the memory page;
scanning the page identification of the memory page again based on a preset time interval to acquire a target page identification of the memory page;
and under the condition that the initial page identification and the target page identification are both empty, determining the memory page as the memory page to be compressed.
Optionally, the apparatus is further configured for:
and acquiring a structure address of the data management structure, storing the structure address into a page table corresponding to the memory page to be compressed, and establishing a mapping relation between the structure address and the page table corresponding to the memory page to be compressed.
Optionally, the apparatus is further configured for:
determining a first structure address of the data management structure, and storing the first structure address into a first page table entry of a page table corresponding to the memory page to be compressed;
and determining a second structure address of the data management structure, and storing the second structure address to a second page table item of a page table corresponding to the memory page to be compressed, wherein the structure address comprises the first structure address and the second structure address.
Optionally, the apparatus is further configured for:
and acquiring a page table item identifier of the memory page to be compressed, and setting the page table item identifier to be in a first state.
In the data processing apparatus provided in the embodiments of the present disclosure, the idle page descriptors of the memory pages to be compressed are compressed, which reduces management overhead and effectively saves memory of the physical server.
Corresponding to the above method embodiments, the present disclosure further provides another embodiment of a data processing apparatus, and fig. 8 shows a schematic structural diagram of another data processing apparatus provided in one embodiment of the present disclosure. As shown in fig. 8, the apparatus includes:
A second receiving module 802 configured to receive a data processing request for a memory page, and allocate a data storage structure for the memory page based on the data processing request;
a second determining module 804, configured to obtain a page table corresponding to the memory page, and determine a structure address of a data management structure corresponding to the memory page based on the page table;
an acquisition module 806 configured to acquire a first page descriptor and a second page descriptor of the memory page from the data management structure based on the structure address, and determine at least one third page descriptor based on the second page descriptor;
a second storage module 808 is configured to store the first page descriptor, the second page descriptor, and at least one third page descriptor to the data storage structure.
Optionally, the second determining module 804 is further configured to:
and acquiring a first page table entry and a second page table entry in a page table corresponding to the memory page, and determining a structure address of a data management structure corresponding to the memory page based on the first page table entry and the second page table entry.
Optionally, the apparatus is further configured for:
receiving a data inquiry request, and determining a page table corresponding to a memory page according to a memory page identifier of stored data carried in the data inquiry request;
And under the condition that the page table item identification of the memory page is in the first state, judging whether a page table of the memory page has a corresponding structure address, and if so, receiving a data processing request aiming at the memory page.
Optionally, the apparatus is further configured for:
and setting a page table item identifier corresponding to the memory page to be in a second state based on the decompressed first page descriptor, second page descriptor and at least one third page descriptor.
According to the data processing apparatus provided by the embodiments of the present specification, the page table entry identifier of the memory page is manipulated so that whether the page descriptors of the memory page are compressed can be judged directly from the state of that identifier, thereby realizing a transparent method of compressing and decompressing page descriptors that is economical and efficient to operate and use.
The above is a schematic description of the two data processing apparatuses of this embodiment. It should be noted that the technical solutions of the data processing apparatuses and the technical solution of the data processing method belong to the same conception; for details of the apparatus solutions that are not described in detail, reference may be made to the description of the data processing method.
Fig. 9 illustrates a block diagram of a computing device 900 provided according to one embodiment of the present specification. The components of the computing device 900 include, but are not limited to, a memory 910 and a processor 920. The processor 920 is coupled to the memory 910 via a bus 930, and a database 950 is configured to store data.
The computing device 900 also includes an access device 940 that enables the computing device 900 to communicate via one or more networks 960. Examples of such networks include the public switched telephone network (PSTN), a local area network (LAN), a wide area network (WAN), a personal area network (PAN), or a combination of communication networks such as the Internet. The access device 940 may include one or more of any type of wired or wireless network interface (e.g., a network interface card (NIC)), such as an IEEE 802.11 wireless local area network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a universal serial bus (USB) interface, a cellular network interface, a Bluetooth interface, a near field communication (NFC) interface, and so on.
In one embodiment of the present description, the above-described components of computing device 900 and other components not shown in FIG. 9 may also be connected to each other, for example, by a bus. It should be understood that the block diagram of the computing device illustrated in FIG. 9 is for exemplary purposes only and is not intended to limit the scope of the present description. Those skilled in the art may add or replace other components as desired.
Computing device 900 may be any type of stationary or mobile computing device including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smart phone), wearable computing device (e.g., smart watch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 900 may also be a mobile or stationary server.
The processor 920 is configured to execute computer-executable instructions which, when executed by the processor, implement the steps of the data processing method.
The foregoing is a schematic illustration of a computing device of this embodiment. It should be noted that, the technical solution of the computing device and the technical solution of the data processing method belong to the same concept, and details of the technical solution of the computing device, which are not described in detail, can be referred to the description of the technical solution of the data processing method.
An embodiment of the present specification also provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement the steps of the data processing method.
The above is an exemplary version of a computer-readable storage medium of the present embodiment. It should be noted that, the technical solution of the storage medium and the technical solution of the data processing method belong to the same concept, and details of the technical solution of the storage medium which are not described in detail can be referred to the description of the technical solution of the data processing method.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The computer instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content contained in the computer readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in each jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but it should be understood by those skilled in the art that the embodiments are not limited by the order of actions described, as some steps may be performed in other order or simultaneously according to the embodiments of the present disclosure. Further, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily all required for the embodiments described in the specification.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The preferred embodiments of the present specification disclosed above are merely used to help clarify the present specification. Alternative embodiments are not intended to be exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the teaching of the embodiments. The embodiments were chosen and described in order to best explain the principles of the embodiments and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. This specification is to be limited only by the claims and the full scope and equivalents thereof.

Claims (14)

1. A data processing method, applied to a first processing unit, the method comprising:
receiving a data processing request aiming at a memory page, and acquiring and scanning a memory page list based on the data processing request;
determining at least one memory page to be compressed from the memory page list according to a preset algorithm, wherein each memory page to be compressed comprises a first page descriptor, a second page descriptor and at least one third page descriptor;
creating a data management structure for the memory page to be compressed based on the first page descriptor, the second page descriptor, and at least one third page descriptor;
copying the first page descriptor and the second page descriptor from the memory page to be compressed, storing the first page descriptor and the second page descriptor in the data management structure, and deleting the first page descriptor, the second page descriptor and at least one third page descriptor of the memory page to be compressed.
2. The data processing method according to claim 1, further comprising, before receiving the data processing request for the memory page:
acquiring a target processing page number aiming at a current memory page under the condition that the memory monitoring system determines that the memory value of the current memory page meets a preset memory condition;
And determining the current processing page number of the current memory page, and receiving a data processing request aiming at the current memory page under the condition that the current processing page number is smaller than the target processing page number.
3. The data processing method according to claim 2, wherein when the memory monitoring system determines that the memory value of the current memory page meets the preset memory condition, the obtaining the target processing page number for the current memory page includes:
opening a use interface under the condition that the memory monitoring system determines that the memory value of the current memory page meets the preset memory condition;
acquiring a target processing page number aiming at the current memory page based on the using interface;
correspondingly, receiving a data processing request for the current memory page under the condition that the current processing page number is smaller than the target processing page number, including:
and receiving a data processing request aiming at the current memory page based on a using interface under the condition that the current processing page number is smaller than the target processing page number.
4. The data processing method according to claim 1 or 2, wherein the memory page list includes a first type memory page list and a second type memory page list;
Correspondingly, the determining at least one memory page to be compressed from the memory page list according to the preset algorithm includes:
acquiring at least one memory page to be compressed under the condition that the first type memory page list comprises the at least one memory page to be compressed according to a preset algorithm; or alternatively
Acquiring at least one memory page to be compressed under the condition that the second type memory page list comprises the at least one memory page to be compressed according to a preset algorithm; or alternatively
And under the condition that the first type memory page list comprises at least one memory page to be compressed according to a preset algorithm and the second type memory page list comprises at least one memory page to be compressed, acquiring the at least one memory page to be compressed from the first type memory page list and acquiring the at least one memory page to be compressed from the second type memory page list.
5. The method of claim 4, wherein determining that the first type of memory page list includes at least one memory page to be compressed according to a predetermined algorithm comprises:
acquiring a page identifier of each memory page in the first type memory page list, and determining the memory page as a memory page to be compressed under the condition that the page identifier of the memory page is determined to be empty;
Correspondingly, the determining that the second type memory page list includes at least one memory page to be compressed according to the preset algorithm includes:
and acquiring a page identifier of each memory page in the second type memory page list, and determining the memory page as a memory page to be compressed under the condition that the page identifier of the memory page is determined to be empty.
6. The data processing method according to claim 5, wherein in the case that the page identifier of the memory page is determined to be empty, determining that the memory page is a memory page to be compressed includes:
scanning the page identifier of the memory page to obtain an initial page identifier of the memory page;
scanning the page identification of the memory page again based on a preset time interval to acquire a target page identification of the memory page;
and under the condition that the initial page identification and the target page identification are both empty, determining the memory page as the memory page to be compressed.
7. The data processing method according to claim 1 or 2, wherein the first page descriptor and the second page descriptor are copied from the memory page to be compressed and stored in the data management structure, further comprising:
and acquiring a structure address of the data management structure, storing the structure address into a page table corresponding to the memory page to be compressed, and establishing a mapping relation between the structure address and the page table corresponding to the memory page to be compressed.
8. The data processing method according to claim 7, wherein the obtaining the structure address of the data management structure and storing the structure address in the page table corresponding to the memory page to be compressed includes:
determining a first structure address of the data management structure, and storing the first structure address into a first page table entry of a page table corresponding to the memory page to be compressed;
and determining a second structure address of the data management structure, and storing the second structure address to a second page table item of a page table corresponding to the memory page to be compressed, wherein the structure address comprises the first structure address and the second structure address.
9. A data processing method, applied to a second processing unit, the method comprising:
receiving a data processing request for a memory page, and distributing a data storage structure for the memory page based on the data processing request;
acquiring a page table corresponding to the memory page, and determining a structure address of a data management structure corresponding to the memory page based on the page table;
acquiring a first page descriptor and a second page descriptor of the memory page from the data management structure based on the structure address, and determining at least one third page descriptor based on the second page descriptor;
Storing the first page descriptor, the second page descriptor, and at least one third page descriptor to the data storage structure.
10. The data processing method according to claim 9, wherein the obtaining the page table corresponding to the memory page, and determining the structure address of the data management structure corresponding to the memory page based on the page table, includes:
acquiring a first page table entry and a second page table entry in a page table corresponding to the memory page, and determining a structure address of a data management structure corresponding to the memory page based on the first page table entry and the second page table entry;
before receiving the data processing request for the memory page, the method further comprises:
receiving a data inquiry request, and determining a page table corresponding to a memory page according to a memory page identifier of stored data carried in the data inquiry request;
and under the condition that the page table item identification of the memory page is in the first state, judging whether a page table of the memory page has a corresponding structure address, and if so, receiving a data processing request aiming at the memory page.
11. A data processing apparatus comprising:
the first receiving module is configured to receive a data processing request for a memory page, and acquire and scan a memory page list based on the data processing request;
The first determining module is configured to determine at least one memory page to be compressed from the memory page list according to a preset algorithm, wherein each memory page to be compressed comprises a first page descriptor, a second page descriptor and at least one third page descriptor;
a creation module configured to create a data management structure for the memory page to be compressed based on the first page descriptor, the second page descriptor, and at least one third page descriptor;
and a first storage module configured to copy the first page descriptor and the second page descriptor from the memory page to be compressed, store them in the data management structure, and delete the first page descriptor, the second page descriptor and at least one third page descriptor of the memory page to be compressed.
12. A data processing apparatus comprising:
a second receiving module configured to receive a data processing request for a memory page, and allocate a data storage structure for the memory page based on the data processing request;
the second determining module is configured to acquire a page table corresponding to the memory page, and determine a structure address of a data management structure corresponding to the memory page based on the page table;
An acquisition module configured to acquire a first page descriptor and a second page descriptor of the memory page from the data management structure based on the structure address, and to determine at least one third page descriptor based on the second page descriptor;
a second storage module configured to store the first page descriptor, the second page descriptor, and at least one third page descriptor to the data storage structure.
13. A computing device, comprising:
a memory and a processor;
the memory is configured to store computer executable instructions and the processor is configured to execute the computer executable instructions to implement the steps of the data processing method of any of claims 1-8 or 9-10.
14. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the data processing method of any one of claims 1 to 8 or 9 to 10.
CN202110362007.8A 2021-03-31 2021-04-02 Data processing method and device Active CN113296940B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110350252 2021-03-31
CN2021103502527 2021-03-31

Publications (2)

Publication Number Publication Date
CN113296940A CN113296940A (en) 2021-08-24
CN113296940B true CN113296940B (en) 2023-12-08

Family

ID=77319428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110362007.8A Active CN113296940B (en) 2021-03-31 2021-04-02 Data processing method and device

Country Status (1)

Country Link
CN (1) CN113296940B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114579253A (en) * 2022-02-24 2022-06-03 阿里巴巴(中国)有限公司 Memory scanning method and device
CN115794397A (en) * 2022-11-29 2023-03-14 阿里云计算有限公司 Cold and hot page management accelerating device and method, MMU, processor and electronic device
CN117130565B (en) * 2023-10-25 2024-02-06 苏州元脑智能科技有限公司 Data processing method, device, disk array card and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1970804A1 (en) * 2007-03-05 2008-09-17 Slipstream Data, Inc. System and method for dynamic memory allocation
CN101458623A (en) * 2007-12-11 2009-06-17 闪联信息技术工程中心有限公司 Method and apparatus for loading multimedia information in UI interface
CN106294190A (en) * 2015-05-25 2017-01-04 中兴通讯股份有限公司 A kind of memory space management and device
WO2019071610A1 (en) * 2017-10-13 2019-04-18 华为技术有限公司 Method and apparatus for compressing and decompressing memory occupied by processor
CN111352861A (en) * 2020-02-19 2020-06-30 Oppo广东移动通信有限公司 Memory compression method and device and electronic equipment
CN111736980A (en) * 2019-03-25 2020-10-02 华为技术有限公司 Memory management method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9817776B2 (en) * 2015-01-19 2017-11-14 Microsoft Technology Licensing, Llc Memory descriptor list caching and pipeline processing

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1970804A1 (en) * 2007-03-05 2008-09-17 Slipstream Data, Inc. System and method for dynamic memory allocation
CN101458623A (en) * 2007-12-11 2009-06-17 闪联信息技术工程中心有限公司 Method and apparatus for loading multimedia information in UI interface
CN106294190A (en) * 2015-05-25 2017-01-04 中兴通讯股份有限公司 A kind of memory space management and device
WO2019071610A1 (en) * 2017-10-13 2019-04-18 华为技术有限公司 Method and apparatus for compressing and decompressing memory occupied by processor
CN110023906A (en) * 2017-10-13 2019-07-16 华为技术有限公司 A kind of method and device compressed and decompress memory shared by processor
CN111736980A (en) * 2019-03-25 2020-10-02 华为技术有限公司 Memory management method and device
CN111352861A (en) * 2020-02-19 2020-06-30 Oppo广东移动通信有限公司 Memory compression method and device and electronic equipment

Also Published As

Publication number Publication date
CN113296940A (en) 2021-08-24

Similar Documents

Publication Publication Date Title
CN113296940B (en) Data processing method and device
US10990540B2 (en) Memory management method and apparatus
US20200057729A1 (en) Memory access method and computer system
KR100734823B1 (en) Method and apparatus for morphing memory compressed machines
USRE43483E1 (en) System and method for managing compression and decompression of system memory in a computer system
US9087021B2 (en) Peer-to-peer transcendent memory
US9081692B2 (en) Information processing apparatus and method thereof
US20070005911A1 (en) Operating System-Based Memory Compression for Embedded Systems
CN106970881B (en) Hot and cold page tracking and compression recovery method based on large page
US20170004069A1 (en) Dynamic memory expansion by data compression
CN110554837A (en) Intelligent switching of fatigue-prone storage media
CN111651236A (en) Virtual machine memory optimization processing method and related device
EP3812904B1 (en) Swap area in memory using multiple compression algorithms
WO2022151985A1 (en) Virtual memory-based data storage method and apparatus, device, and storage medium
WO2024099448A1 (en) Memory release method and apparatus, memory recovery method and apparatus, and computer device and storage medium
EP4369191A1 (en) Memory scanning method and apparatus
CN107003940B (en) System and method for providing improved latency in non-uniform memory architectures
US9772776B2 (en) Per-memory group swap device
CN114995993A (en) Memory recovery method and device
CN116107925B (en) Data storage unit processing method
CN106970826B (en) Large page-based missing page abnormity solving method
CN116107509A (en) Data processing method and device and electronic equipment
CN115268767A (en) Data processing method and device
CN112069433A (en) File page processing method and device, terminal equipment and storage medium
CN117251292B (en) Memory management method, system, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40059134

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240304

Address after: # 03-06, Lai Zan Da Building 1, 51 Belarusian Road, Singapore

Patentee after: Alibaba Innovation Co.

Country or region after: Singapore

Address before: Room 01, 45th Floor, AXA Building, 8 Shanton Road

Patentee before: Alibaba Singapore Holdings Ltd.

Country or region before: Singapore