CN113485834A - Shared memory management method and device, computer equipment and storage medium


Publication number
CN113485834A
Authority
CN
China
Prior art keywords
memory
pool
memory block
shared memory
network card
Prior art date
Legal status
Pending
Application number
CN202110782884.0A
Other languages
Chinese (zh)
Inventor
张卫
赵楠
何志东
Current Assignee
Shenzhen Archforce Financial Technology Co Ltd
Original Assignee
Shenzhen Archforce Financial Technology Co Ltd
Priority date
Application filed by Shenzhen Archforce Financial Technology Co Ltd filed Critical Shenzhen Archforce Financial Technology Co Ltd
Priority to CN202110782884.0A
Publication of CN113485834A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/02Details


Abstract

The application relates to a shared memory management method and apparatus, a computer device, and a storage medium. The method comprises: an application calls a first interface of an operating system to request memory and obtains a memory block of a specified size from a constructed shared memory pool, wherein the memory block is used to construct an application message and is no smaller than the memory required for one round of service processing; the address of the memory block is sent to a network card, so that the network card directly reads the memory block at that address through a direct memory access interface and sends the application message in the memory block to a data receiver; and the shared memory pool is registered with the network card, so that the network card has read permission for the shared memory pool. This method can improve process execution efficiency to a certain extent.

Description

Shared memory management method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of memory management technologies, and in particular, to a shared memory management method and apparatus, a computer device, and a storage medium.
Background
With the development of memory management technology, shared memory management has emerged to implement memory sharing among multiple processes and threads, improve data transmission efficiency, and reduce memory occupation.
In conventional approaches, data is sent to a target module or application through ordinary inter-thread or inter-process communication mechanisms. System calls occur throughout the process, and copying of memory cannot be avoided; the main negative impact of memory copying is the extra overhead and latency introduced by memory allocation, memory release, and the copy itself. Specifically, when an application requests memory from the system, it obtains the memory by calling a generic function interface that manages a certain amount of memory, and then copies data into the obtained memory. The data in that memory is then sent out through the kernel.
However, the memory managed by the generic function interface is small, so memory must often be obtained through system calls, and sending data through the kernel also requires system calls. Because the managed memory is small, memory requests are frequent, and the corresponding kernel sends are equally frequent. Frequent memory requests and kernel sends lead to correspondingly frequent system calls, which impose extra overhead and latency on the system and reduce process execution efficiency to a certain extent.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a shared memory management method, apparatus, computer device, and storage medium that can improve process execution efficiency to some extent.
A method of shared memory management, the method comprising:
requesting memory by an application calling a first interface of an operating system, and obtaining a memory block of a specified size from a constructed shared memory pool, wherein the memory block is used to construct an application message and is no smaller than the memory required for one round of service processing;
sending the address of the memory block to a network card, so that the network card directly reads the memory block at that address through a direct memory access interface and sends the application message in the memory block to a data receiver; wherein the shared memory pool is registered with the network card, so that the network card has read permission for the shared memory pool.
In one embodiment, after the sending of the application message in the memory block to the data receiver, the method further includes:
forwarding the offset address of the memory block within the shared memory pool to a message landing subprocess for landing, so that the memory block is backed up locally.
In one embodiment, forwarding the offset address of the memory block within the shared memory pool to the message landing subprocess for landing includes:
obtaining the virtual address at which the memory block is mapped into the message landing subprocess;
obtaining the offset address of the memory block within the shared memory pool, sent by the application calling a sending interface of the operating system;
deriving, from the virtual address and the offset address, the memory address of the memory block in the message landing subprocess;
and performing the landing by the message landing subprocess according to the memory address.
In one embodiment, the method further comprises:
defining a structure for the shared memory pool;
creating a shared memory pool through a memory sharing function;
defining a pointer to the shared memory structure to obtain a defined pointer;
and mapping the created shared memory pool to the defined pointer through a file mapping function to obtain the constructed shared memory pool.
In one embodiment, the method further comprises:
receiving, by the network card, an external message and writing the external message into the memory block;
when the network card finishes writing the memory block, submitting the memory block to the application for data reading and processing;
wherein the shared memory pool is registered with the network card, so that the network card has write permission for the shared memory pool.
In one embodiment, the method further comprises:
reading the memory margin of the shared memory pool into the register corresponding to a thread in real time, and checking whether the memory margin of the shared memory pool satisfies the memory required for the thread's operation;
when the memory margin satisfies the memory required for the thread's operation, allocating the required memory for the thread from the shared memory pool, obtaining a compare-and-swap instruction, and comparing the memory margin with the memory variable in the register corresponding to the thread according to the compare-and-swap instruction;
and when the memory margin equals the memory variable in the register corresponding to the thread, subtracting the required memory from the memory variable to obtain a new memory variable, and storing the new memory variable into the register corresponding to the thread.
In one embodiment, a process includes a first thread and a second thread; after the first thread finishes executing the step of reading the memory margin of the shared memory pool into its corresponding register in real time, the second thread executes that same reading step.
A shared memory management apparatus, the apparatus comprising:
a memory application module, configured to request memory by an application calling a first interface of an operating system and obtain a memory block of a specified size from a constructed shared memory pool, wherein the memory block is used to construct an application message and is no smaller than the memory required for one round of service processing;
an application message sending module, configured to send the address of the memory block to a network card, so that the network card directly reads the memory block at that address through a direct memory access interface and sends the application message in the memory block to a data receiver; wherein the shared memory pool is registered with the network card, so that the network card has read permission for the shared memory pool.
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps:
requesting memory by an application calling a first interface of an operating system, and obtaining a memory block of a specified size from a constructed shared memory pool, wherein the memory block is used to construct an application message and is no smaller than the memory required for one round of service processing;
sending the address of the memory block to a network card, so that the network card directly reads the memory block at that address through a direct memory access interface and sends the application message in the memory block to a data receiver;
wherein the shared memory pool is registered with the network card, so that the network card has read permission for the shared memory pool.
A computer-readable storage medium storing a computer program which, when executed by a processor, implements the following steps:
requesting memory by an application calling a first interface of an operating system, and obtaining a memory block of a specified size from a constructed shared memory pool, wherein the memory block is used to construct an application message and is no smaller than the memory required for one round of service processing;
sending the address of the memory block to a network card, so that the network card directly reads the memory block at that address through a direct memory access interface and sends the application message in the memory block to a data receiver;
wherein the shared memory pool is registered with the network card, so that the network card has read permission for the shared memory pool.
According to the shared memory management method, apparatus, computer device, and storage medium above, the shared memory pool is registered with the network card, so that the network card has read permission for the pool. The application calls a first interface of the operating system to request memory and obtains a memory block of a specified size from the constructed shared memory pool; the specified size can be set to at least the memory required for one round of service processing, i.e., the block is deliberately large. The application can construct an application message in the block, and the network card directly reads the block through the direct memory access interface according to the block's address; this process does not require the central processing unit to drive the delivery of memory to the network card. The network card then sends the application message to the data receiver. Because a large memory is requested once, the operating system interface need not be called repeatedly; and because the network card reads the memory directly, no per-process memory copying back and forth is required. The method can therefore improve process execution efficiency to a certain extent.
Drawings
FIG. 1 is a diagram of an application environment of a shared memory management method according to an embodiment;
FIG. 2 is a flow diagram illustrating a method for shared memory management according to an embodiment;
FIG. 3 is a block diagram of a shared memory management device according to an embodiment;
FIG. 4 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The shared memory management method provided by the present application can be applied in the application environment shown in FIG. 1. A data sender 102 writes application messages into a memory block through the application; the memory block resides on the server 104, and a data receiver 106 receives the application messages sent by the server 104. The server 104 may be implemented as an independent server or as a cluster of multiple servers. The application runs on the server 104, the operating system runs on the CPU, and the CPU resides in the server 104. The application calls an interface of the operating system to request memory and obtains memory blocks of a certain size. Unlike the conventional data sending process, by registering the network card and using the direct memory access interface, the network card can directly read the application message in the memory block and send it to the data receiver.
In an embodiment, as shown in FIG. 2, a shared memory management method is provided. The method is described as applied to the server in FIG. 1 and includes the following steps:
Step 202: the application calls a first interface of the operating system to request memory and obtains a memory block of a specified size from the constructed shared memory pool, where the memory block is used to construct an application message and is no smaller than the memory required for one round of service processing.
The application is a program on the server, i.e., a process instance running on the server. In the present application, all software programs run on the CPU; the kernel belongs to the operating system and also runs on the CPU. A server has multiple cores, and the number of cores on a server need not match the number of CPUs, because a single CPU may be configured with 4 or 8 cores. The constructed shared memory pool is one large memory block, which can be divided into many small memory blocks. The memory block of the specified size is one block within the shared memory pool.
The first interface of the operating system is the interface the application calls to ask the operating system for memory. By calling this first interface, the application can request a relatively large memory, so the obtained memory is a memory block of a specified size, where the specified size is set according to business needs at initialization; for example, it may be 1 GB. Because a large memory is requested at initialization, it remains held by the memory management structure throughout operation and, under normal circumstances, is not released back to the operating system. System calls are therefore not triggered at high frequency, and application performance (latency) is not affected.
After obtaining the memory block of the specified size, the data sender can use the application to construct, i.e., write, an application message in the block; this is a "write" process that puts data into the memory block. An application message is a message produced through the application. For example, in the financial industry, users buy and sell stocks, and a data service provider keeps statistics on their trading; each time a user trades, the provider records, analyzes, and computes over the user and the current transaction. The analysis data generated for the user and the current transaction is one example of a message produced through an application; real application scenarios are more complex, so application messages should not be limited to this illustration.
Step 204: the address of the memory block is sent to the network card, so that the network card directly reads the memory block at that address through a direct memory access interface and sends the application message in the memory block to the data receiver; the shared memory pool is registered with the network card, so that the network card has read permission for the shared memory pool.
DMA (Direct Memory Access) is a hardware mechanism through which the network card can directly read and send the application message.
Specifically, when the application triggers the network card to perform a DMA read, the server sends the address of the memory block to the network card, so that the network card directly reads the application message through the direct memory access interface according to the memory block's address and sends the application message out of the server to the data receiver.
When the shared memory pool is created, its memory is registered with the network card, so that the network card has read permission for all memory in the entire pool. Since all memory in the pool is registered with the network card, the memory block of the specified size is registered along with it, and the network card is entitled to read it; the block therefore needs no additional registration with the network card.
Once the network card has read permission for the memory block, the CPU no longer needs to drive the network card when sending data: the memory block is read by calling the operating system's direct memory access interface and delivered to the network card attached to the server's memory bus, and the data can then be sent to the data receiver.
In summary, on the one hand, requesting a sufficiently large memory block fundamentally reduces the number of calls into the operating system; on the other hand, registering the memory with the network card means that, when sending data, the network card can transmit without being driven by the CPU. With improvements on both the memory request side and the data sending side, process execution efficiency can be improved to a certain extent.
In the shared memory management method, the shared memory pool is registered with the network card, so that the network card has read permission for the pool. The application calls a first interface of the operating system to request memory and obtains a memory block of a specified size from the constructed shared memory pool; the specified size can be set to at least the memory required for one round of service processing, i.e., the block is deliberately large. The application can construct an application message in the block, and the network card directly reads the block through the direct memory access interface. This process does not require the central processing unit to drive the delivery of memory to the network card. The network card then sends the application message to the data receiver. Because a large memory is requested once, the operating system interface need not be called repeatedly; and because the network card reads the memory directly, no per-process memory copying back and forth is required. The method can therefore improve process execution efficiency to a certain extent.
In one embodiment, after the sending of the application message in the memory block to the data receiver, the method further includes:
forwarding the offset address of the memory block within the shared memory pool to a message landing subprocess for landing, so that the memory block is backed up locally.
The offset address of the memory block within the shared memory pool is obtained from the virtual address at which the block is mapped into the message sending main process and from the block's memory address; the message sending main process is the process in which the network card sends out the application message.
A process is a program running on the CPU; the message landing subprocess is a segment of program on the CPU that backs up, locally, the application messages of the message sending main process. "Landing" means that the operations of the message landing subprogram have completed, which amounts to one local backup of the application message in the memory block. After the message landing subprocess lands, the memory block can be accessed and read locally, and corresponding data processing can be carried out according to business requirements.
Specifically, the application calls a sending interface of the operating system to send the offset address of the memory block within the shared memory pool to the message landing subprocess, after which the memory block can be accessed locally for reading and processing.
In this embodiment, the offset address of the memory block within the shared memory pool is forwarded, via a sending interface called by the application, to the message landing subprocess for landing, so that the memory block is backed up locally. The landing subprocess can likewise access the memory block and read and process its data.
In one embodiment, forwarding the offset address of the memory block within the shared memory pool to the message landing subprocess for landing includes: obtaining the virtual address at which the memory block is mapped into the message landing subprocess; obtaining the offset address of the memory block within the shared memory pool, sent by the application calling a sending interface of the operating system; deriving, from the virtual address and the offset address, the memory address of the memory block in the message landing subprocess; and performing the landing by the message landing subprocess according to the memory address.
An offset address expresses the offset relationship between a virtual address and a memory address. A virtual address is the address obtained when the memory block is mapped into a process's address space, while the block itself has a real physical address, the memory address, which does not change from process to process. In different processes, the virtual addresses to which the memory block of the specified size is mapped differ, but the block's memory address is the same in every process.
The application calls a sending interface of the operating system to send the offset address of the memory block of the specified size within the shared memory pool to the message landing subprocess for landing. Specifically, regard the process in which the network card sends the application message to the data receiver as a complete first process, and the message landing subprocess as a second process. The memory block may be mapped into both the first and the second process; when mapped into the first process, its virtual address is a first virtual address, and when mapped into the second process, its virtual address is a second virtual address (the virtual address at which the block is mapped into the message landing subprocess).
In the first process, a first offset address can be derived from the memory address and the virtual address of the memory block. The first process sends this first offset address to the second process, and the second process can determine the block's memory address from the first offset address and the second virtual address. Note that the first process is one that has successfully connected to the memory management structure, while the second process has not. The first process can therefore see the block's memory address, whereas the second process can see only the second virtual address and not the memory address.
Further, the first process is not limited to the process in which the network card sends the application message to the data receiver and may be another process; likewise, the second process is not limited to the message landing subprocess and may be another process.
In this embodiment, only the virtual address at which the memory block is mapped into the message landing subprocess and the offset address of the block within the shared memory pool (sent via a sending interface called by the application) need to be obtained; the message landing subprocess can then determine the block's memory address and access the block through it, without copying the block. Whereas a memory copy may involve at least several megabytes, forwarding the offset address takes only a few bytes of memory, further improving process execution efficiency to a certain extent.
In one embodiment, the shared memory management method further includes: defining a structure for the shared memory pool; creating a shared memory pool through a memory sharing function; defining a pointer to the shared memory structure to obtain a defined pointer; and mapping the created shared memory pool to the defined pointer through a file mapping function to obtain the constructed shared memory pool.
A structure combines data of different types into an organic whole. The memory sharing function may be the CreateFileMapping function, through which the shared memory pool is created. After the structure is defined, defining a pointer pShareMem to the shared memory structure yields a defined pointer, which can be used for space allocation, space release, and initialization. Finally, the created shared memory pool is mapped to the defined pointer through the file mapping function MapViewOfFile, giving the constructed shared memory pool.
In this embodiment, a structure for the shared memory pool is defined, the pool is created through a memory sharing function, a pointer to the shared memory structure is defined to obtain a defined pointer, and the created pool is mapped to that pointer through a file mapping function, yielding the constructed shared memory pool.
In one embodiment, the shared memory management method further includes: the network card receives an external message and writes the external message into the memory block; when the network card finishes writing the memory block, submitting the memory block to the application for data reading and processing; and the shared memory pool is registered with the network card, so that the network card has the write permission of the shared memory pool.
The shared memory pool comprises a sending shared pool and a receiving shared pool, the shared memory pool is a large memory, and a plurality of small memories can be divided in the large memory. In order to manage the use of the shared memory pool, a memory management structure for managing the shared memory pool is also created. The memory management structure is a program for managing and registering the memory state of the shared memory pool. For example, when multiple threads run and each thread requests memory from the shared memory pool, the memory management structure displays the current use state of the shared memory pool to each thread, allocates the memory of each thread by the memory management structure, and timely changes and registers the memory use condition of the shared memory pool after allocation is completed. Because the memory management structure does not relate to the disassembling and merging logic of the complex memory blocks, the multithreading application release efficiency can be higher.
The external message is data input in the service processing process, and the message received by the network card not only has the application message of the application, but also comprises the external message. After receiving the external message, the network card writes the external message into the memory block, which is the previously indicated memory block with the specified size. The network card informs the application when acquiring a new external message, so that the application can know that the external message is written in time. When the memory block is completely written and the network card completes writing the memory block, the memory block is submitted to the application, and the application can perform corresponding data reading and processing on the data read from the memory block according to the service processing requirement.
In this embodiment, the network card has write permission for the shared memory pool, so that when it receives an external message it can write the message into the memory block and submit the block to the application once writing completes. The application can then read and process the data in the memory block without repeatedly calling operating system interfaces, and because the network card writes the memory directly, no back-and-forth memory copies between processes are required.
In one embodiment, the shared memory management method further includes: reading the memory margin of the shared memory pool into a register corresponding to the thread in real time, and checking whether the memory margin of the shared memory pool satisfies the memory required for the thread to run; when the memory margin satisfies the memory required for the thread to run, allocating the required memory to the thread from the shared memory pool, obtaining a compare-and-swap instruction, and comparing the memory margin with the memory variable in the register corresponding to the thread according to the compare-and-swap instruction; and when the memory margin equals the memory variable in the register corresponding to the thread, subtracting the required memory from the memory variable to obtain a new memory variable, and storing the new memory variable into the register corresponding to the thread.
The CPU instruction here refers to the compare-and-swap instruction, an instruction provided by the CPU for atomic operations. A process corresponds to a plurality of threads; a process is a program running on the CPU, the registers are located in the CPU, and a register stores the memory margin of the shared memory pool. The memory margin is the remaining memory of the shared memory pool as reported by the memory management structure. Assuming the shared memory pool is 1 GB at initialization, as the pool hands out memory this 1 GB continuously decreases. Whenever the pool hands out memory there is a corresponding user that needs it, and each such user can be a thread. When the thread-local queue of a thread cannot satisfy that thread's requirement, the thread applies to the shared memory pool for memory, so the memory margin of the pool keeps decreasing.
The memory variable stored in a register of the CPU can be changed by the CPU, and the changing process comprises the following steps:
Check whether the memory margin of the shared memory pool satisfies the memory required for the thread to run. When it does, allocate the required memory to the thread from the shared memory pool, obtain a compare-and-swap instruction, and compare the memory margin with the memory variable in the register corresponding to the thread according to that instruction. When the memory margin is not satisfied, i.e., it is smaller than the memory required for the thread to run, the system stops at the current step and reports an error. When the memory margin equals the memory variable in the register corresponding to the thread, subtract the required memory from the memory variable to obtain a new memory variable, and store the new value into the register corresponding to the thread. For example, when thread A needs 10 MB of memory, it applies to the shared memory pool for 10 MB through the memory management structure. The memory management structure compares the pool's memory margin with the requested 10 MB; if the margin is greater than 10 MB, the margin can satisfy the memory required for the current thread to run. The memory management structure then carves 10 MB out of the shared memory pool for the thread, and the pool's memory correspondingly decreases by 10 MB. At the same time, a compare-and-swap instruction is obtained and the memory margin is compared with the memory variable in the corresponding register; when the two are equal, the memory variable in the register corresponding to the thread is modified: the allocated memory is subtracted from the memory variable to obtain a new memory variable, which is then stored into the register corresponding to the thread.
For example: suppose the memory margin of the shared memory pool was originally 100 MB, and only 90 MB now remain because thread A's request carved out 10 MB. Before the 10 MB was obtained, the memory variable read from the register corresponding to thread A was 100 MB. After the 10 MB is obtained, since the margin before allocation was 100 MB and the memory variable in the register is also 100 MB, the two values are equal, and according to the compare-and-swap instruction the memory variable stored in the register corresponding to thread A is modified to 90 MB.
In this embodiment, when a thread applies for memory from the shared memory pool, it is first checked whether the memory margin of the pool satisfies the memory required for the thread to run. When it does, the required memory is allocated to the thread from the shared memory pool: the pool's memory is released and the thread's memory increases. The memory margin is then compared, according to the compare-and-swap instruction, with the memory variable in the register corresponding to the thread; when the two are equal, the memory variable is modified to obtain a new memory variable, which is stored into the register corresponding to the thread so that the value held in the register is updated in time.
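The check-then-commit sequence above can be sketched with C11 atomics; `margin_alloc` is a hypothetical helper name, and the atomic variable stands in for the register-held memory margin:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Sketch of the allocation path described above: read the margin, check it
 * covers the request, then commit (margin - need) with a compare-and-swap,
 * so that a concurrent update by another thread is detected and retried. */
bool margin_alloc(_Atomic size_t *margin, size_t need) {
    size_t seen = atomic_load(margin);   /* read the margin into a local "register" */
    while (seen >= need) {               /* does the margin satisfy the request?    */
        /* CAS: store the new margin only if it still equals the value we read */
        if (atomic_compare_exchange_weak(margin, &seen, seen - need))
            return true;                 /* new memory variable stored              */
        /* another thread changed the margin first; 'seen' was reloaded -- retry */
    }
    return false;                        /* margin cannot satisfy the request       */
}
```

With a margin of 100 units, a request for 10 succeeds and leaves 90, matching the 100 MB to 90 MB example; a request larger than the remaining margin fails without touching it.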
In one embodiment, one process includes a first thread and a second thread, wherein after the first thread finishes executing the step of reading the memory margin of the shared memory pool into the register corresponding to the thread in real time, the second thread executes the step of reading the memory margin of the shared memory pool into the register corresponding to the thread in real time.
The first thread and the second thread are two different threads.
Specifically, every thread executes the same sequence of steps. Using a lock-free algorithm, each thread first executes the step of reading the memory margin of the shared memory pool into its corresponding register in real time. Because there is a time difference in when each thread performs this first step, the subsequent steps are correspondingly staggered in time, which improves the efficiency with which multiple threads apply for memory.
In this embodiment, after the first thread finishes executing the step of reading the memory margin of the shared memory pool into the register corresponding to the thread in real time, the second thread executes the step of reading the memory margin of the shared memory pool into the register corresponding to the thread in real time, so that the threads are parallel as much as possible on the basis of avoiding thread conflicts, and the efficiency of applying for the memory by multiple threads is improved.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in a sequence indicated by the arrows, they are not necessarily executed in that sequence. Unless explicitly stated otherwise, the steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps in these flowcharts may comprise multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their order of execution is not necessarily sequential, and they may be performed in turns or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 3, there is provided a shared memory management apparatus, including: a memory application module 301 and an application message sending module 302, wherein:
a memory application module 301, configured to apply for memory through a first interface of the operating system called by the application, and to obtain a memory block of a specified size from a constructed shared memory pool, where the memory block is used to construct an application message and the size of the memory block is not less than the memory required for one service processing operation;
an application message sending module 302, configured to send the address of the memory block to a network card, so that the network card directly reads the memory block according to the address through a direct memory access interface, and sends an application message on the memory block to a data receiving party; and the shared memory pool is registered with the network card, so that the network card has the read permission of the shared memory pool.
In one embodiment, the shared memory management apparatus further includes: a local backup module, configured to forward the offset address of the memory block in the shared memory pool to a message landing subprocess for landing (i.e., persisting the data to disk), so that the memory block is backed up locally.
In one embodiment, the local backup module includes: a virtual address acquisition module, an offset address acquisition module, a memory address acquisition module, and a subprocess landing module, wherein:
a virtual address acquisition module, configured to acquire the virtual address at which the memory block is mapped into the message landing subprocess;
an offset address acquisition module, configured to acquire the offset address of the memory block in the shared memory pool, sent through a sending interface of the operating system called by the application;
a memory address acquisition module, configured to enable the message landing subprocess to obtain the memory address of the memory block according to the virtual address and the offset address;
and a subprocess landing module, configured to enable the message landing subprocess to perform landing according to the memory address.
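Since each process maps the shared pool at its own virtual base address, only the offset within the pool is meaningful across processes; a minimal sketch of the address arithmetic behind these modules (helper names are hypothetical):

```c
#include <stddef.h>
#include <stdint.h>

/* The sender derives the block's offset from its own mapping of the pool... */
size_t block_offset(void *mapped_base, void *block) {
    return (size_t)((uint8_t *)block - (uint8_t *)mapped_base);
}

/* ...and the landing subprocess, which has mapped the same pool at its own
 * virtual base address, recovers a usable pointer from base + offset. */
void *block_addr(void *mapped_base, size_t offset) {
    return (uint8_t *)mapped_base + offset;
}
```

Passing the offset rather than a raw pointer is what lets the subprocess locate the memory block even though the two processes see the pool at different virtual addresses.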
In one embodiment, the shared memory management apparatus further includes: the system comprises a structure body defining module, a shared memory pool creating module, a definition pointer acquiring module and a shared memory pool acquiring module, wherein:
the structure body definition module is used for defining the structure body of the shared memory pool;
the shared memory pool creating module is used for creating a shared memory pool through a memory sharing function;
the definition pointer acquisition module is used for defining a pointer pointing to the shared memory structure body to obtain a definition pointer;
and the shared memory pool acquisition module is used for mapping the created shared memory pool to the definition pointer through a file mapping function to obtain the constructed shared memory pool.
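On a POSIX system the four steps above might look as follows; here `shm_open` plays the role of the memory sharing function and `mmap` the file mapping function, while the struct layout and names are assumptions for illustration:

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Illustrative pool layout; the text does not specify the real structure. */
typedef struct {
    size_t        remaining;
    unsigned char data[4096];
} pool_shm_t;

/* Create a named shared memory object (the "memory sharing function") and
 * map it into the process with mmap (the "file mapping function"), yielding
 * a typed pointer -- the "definition pointer" -- to the constructed pool. */
pool_shm_t *create_pool(const char *name) {
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, sizeof(pool_shm_t)) != 0) {  /* size the pool */
        close(fd);
        return NULL;
    }
    void *addr = mmap(NULL, sizeof(pool_shm_t), PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    close(fd);  /* the mapping remains valid after the descriptor is closed */
    return addr == MAP_FAILED ? NULL : (pool_shm_t *)addr;
}
```

Any other process that opens the same name and maps it sees the same pool, which is the property the shared memory pool relies on.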
In one embodiment, the shared memory management apparatus further includes: the external message writing module and the memory block submitting module are provided, wherein:
an external message writing module, configured to receive an external message by the network card, and write the external message into the memory block;
a memory block submitting module, configured to submit the memory block to the application for data reading and processing when the network card completes writing the memory block;
and the shared memory pool is registered with the network card, so that the network card has the write permission of the shared memory pool.
In one embodiment, the shared memory management apparatus further includes: the device comprises a memory allowance reading module, a memory comparison module and a memory variable storage module, wherein:
the memory allowance reading module is used for reading the memory allowance of the shared memory pool into a register corresponding to the thread in real time and verifying whether the memory allowance of the shared memory pool meets the memory required by the thread operation;
the memory comparison module is used for distributing the required memory for the thread from the shared memory pool when the memory allowance meets the memory required by the thread operation, acquiring a comparison and exchange instruction, and comparing the memory allowance with the memory variable in the register corresponding to the thread according to the comparison and exchange instruction;
and the memory variable storage module is used for subtracting the required memory from the memory variable to obtain a new memory variable when the memory allowance is equal to the memory variable in the register corresponding to the thread, and storing the new memory variable into the register corresponding to the thread.
For specific limitations of the shared memory management apparatus, reference may be made to the limitations of the shared memory management method above, which are not repeated here. All or part of the modules in the shared memory management apparatus may be implemented by software, by hardware, or by a combination thereof. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 4. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing memory management data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a shared memory management method.
Those skilled in the art will appreciate that the architecture shown in fig. 4 is merely a block diagram of part of the structure relevant to the present disclosure and does not limit the computer devices to which the present disclosure applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory can include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, and the like. Volatile memory can include Random Access Memory (RAM) or an external cache. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above embodiments express only several implementations of the present application, and although their description is relatively specific and detailed, they shall not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for managing shared memory, the method comprising:
applying for memory through a first interface of an operating system called by an application, and obtaining a memory block of a specified size from a constructed shared memory pool, wherein the memory block is used to construct an application message and the size of the memory block is not less than the memory required for one service processing operation;
sending the address of the memory block to a network card, enabling the network card to directly read the memory block according to the address through a direct memory access interface, and sending the application message on the memory block to a data receiver;
and the shared memory pool is registered with the network card, so that the network card has the read permission of the shared memory pool.
2. The method according to claim 1, wherein after the application message on the memory block is sent to the data receiver, the method further comprises:
forwarding the offset address of the memory block in the shared memory pool to a message landing subprocess for landing, so that the memory block is backed up locally.
3. The method according to claim 2, wherein forwarding the offset address of the memory block in the shared memory pool to the message landing subprocess for landing comprises:
acquiring the virtual address at which the memory block is mapped into the message landing subprocess;
acquiring the offset address of the memory block in the shared memory pool, sent through a sending interface of the operating system called by the application;
enabling the message landing subprocess to obtain the memory address of the memory block according to the virtual address and the offset address;
and performing landing by the message landing subprocess according to the memory address.
4. The method of claim 1, further comprising:
defining a structure of the shared memory pool;
creating a shared memory pool through a memory sharing function;
defining a pointer pointing to the shared memory structure, obtaining a definition pointer;
and mapping the created shared memory pool onto the definition pointer through a file mapping function, obtaining the constructed shared memory pool.
5. The method of claim 1, further comprising:
the network card receives an external message and writes the external message into the memory block;
when the network card finishes writing the memory block, submitting the memory block to the application for data reading and processing;
and the shared memory pool is registered with the network card, so that the network card has the write permission of the shared memory pool.
6. The method of claim 1, further comprising:
reading the memory margin of the shared memory pool into a register corresponding to the thread in real time, and checking whether the memory margin of the shared memory pool satisfies the memory required for the thread to run;
when the memory margin satisfies the memory required for the thread to run, allocating the required memory to the thread from the shared memory pool, obtaining a compare-and-swap instruction, and comparing the memory margin with the memory variable in the register corresponding to the thread according to the compare-and-swap instruction;
and when the memory margin equals the memory variable in the register corresponding to the thread, subtracting the required memory from the memory variable to obtain a new memory variable, and storing the new memory variable into the register corresponding to the thread.
7. The method according to claim 6, wherein one process comprises a first thread and a second thread, and wherein after the first thread finishes executing the step of reading the memory margin of the shared memory pool into the register corresponding to the thread in real time, the second thread executes the step of reading the memory margin of the shared memory pool into the register corresponding to the thread in real time.
8. A shared memory management apparatus, the apparatus comprising:
the memory application module is used for applying for a memory through a first interface of an application calling operating system and obtaining a memory block with a specified size from a constructed shared memory pool, wherein the memory block is used for constructing an application message, and the memory block is not lower than the memory required by one-time service processing;
an application message sending module, configured to send the address of the memory block to a network card, so that the network card directly reads the memory block according to the address through a direct memory access interface, and sends an application message on the memory block to a data receiving party; and the shared memory pool is registered with the network card, so that the network card has the read permission of the shared memory pool.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202110782884.0A 2021-07-12 2021-07-12 Shared memory management method and device, computer equipment and storage medium Pending CN113485834A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110782884.0A CN113485834A (en) 2021-07-12 2021-07-12 Shared memory management method and device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113485834A true CN113485834A (en) 2021-10-08

Family

ID=77937981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110782884.0A Pending CN113485834A (en) 2021-07-12 2021-07-12 Shared memory management method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113485834A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20000038076A (en) * 1998-12-03 2000-07-05 정선종 Method for zero-copy message passing
CN102291298A (en) * 2011-08-05 2011-12-21 曾小荟 Efficient computer network communication method oriented to long message
CN104753814A (en) * 2013-12-31 2015-07-01 国家计算机网络与信息安全管理中心 Packet dispersion method based on network adapter
CN110704214A (en) * 2019-10-14 2020-01-17 北京京东乾石科技有限公司 Inter-process communication method and device


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113778695A (en) * 2021-11-12 2021-12-10 深圳华锐金融技术股份有限公司 Memory snapshot management method, device, equipment and medium crossing application life cycle
CN113778695B (en) * 2021-11-12 2022-04-29 深圳华锐分布式技术股份有限公司 Memory snapshot management method, device, equipment and medium crossing application life cycle
CN115586980A (en) * 2022-10-09 2023-01-10 维塔科技(北京)有限公司 Remote procedure calling device and method
CN117033298A (en) * 2022-10-21 2023-11-10 上海天数智芯半导体有限公司 Tile processor, SOC chip and electronic equipment
CN116662037A (en) * 2023-07-24 2023-08-29 杭州鉴智机器人科技有限公司 Processing method and device for shared memory, electronic equipment and storage medium
CN116662037B (en) * 2023-07-24 2023-10-20 杭州鉴智机器人科技有限公司 Processing method and device for shared memory, electronic equipment and storage medium
CN117493025A (en) * 2023-12-29 2024-02-02 腾讯科技(深圳)有限公司 Resource allocation method, device, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 2301, building 5, Shenzhen new generation industrial park, 136 Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen Huarui Distributed Technology Co.,Ltd.

Address before: Room 2301, building 5, Shenzhen new generation industrial park, 136 Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN ARCHFORCE FINANCIAL TECHNOLOGY Co.,Ltd.