CN114153615A - Memory management method, device, equipment, computer program and storage medium

Memory management method, device, equipment, computer program and storage medium

Info

Publication number
CN114153615A
Authority
CN
China
Prior art keywords
memory
frame
current frame
memory block
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111499167.3A
Other languages
Chinese (zh)
Inventor
孙凌峰
万园
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111499167.3A priority Critical patent/CN114153615A/en
Publication of CN114153615A publication Critical patent/CN114153615A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 - Indexing scheme relating to G06F9/00
    • G06F2209/50 - Indexing scheme relating to G06F9/50
    • G06F2209/5011 - Pool

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System (AREA)

Abstract

The application provides a memory management method, device, equipment, computer program and storage medium. The method includes the following steps: determining, according to a frame calculation task corresponding to a current frame, a current frame memory allocator for the current frame from M preset frame memory allocators, where the M frame memory allocators are reused cyclically on a per-frame basis and M is a positive integer greater than or equal to 2; acquiring, through the current frame memory allocator, a target memory block corresponding to the current frame from M memory block lists of a preset memory pool, where the M memory block lists correspond one-to-one to the M frame memory allocators and the address spaces of the M memory block lists do not overlap; and acquiring memory resources from the target memory block and allocating them to the frame calculation task. The method improves the efficiency of program execution and the stability of memory management.

Description

Memory management method, device, equipment, computer program and storage medium
Technical Field
The present application relates to computer technologies for frame-based computing, and in particular, to a memory management method, apparatus, device, computer program, and storage medium.
Background
At present, when memory management is performed for interactive rendering applications or services that involve frame calculation, such as games or interactive video applications, a linear memory allocator allocates memory linearly from a memory block list for the calculation program of each frame; after one frame has been calculated, the allocation starting point of the memory block list is reset and allocation continues for the next frame. However, when this allocation scheme is applied to multi-frame parallel computing, other frames may still be running their calculations when one frame finishes, so resetting the allocation starting point and continuing to allocate memory at that moment can corrupt the running memory environment of those other frames. The allocation scheme of the related art therefore cannot support multi-frame parallel computing, which reduces the efficiency of program execution and increases the risk of unstable memory management.
Disclosure of Invention
Embodiments of the present application provide a memory management method, apparatus, device, computer program, and storage medium, which can improve stability of memory management and efficiency of program operation.
The technical scheme of the embodiment of the application is realized as follows:
an embodiment of the present application provides a memory management method, including:
determining a current frame memory allocator corresponding to a current frame from M preset frame memory allocators according to a memory request initiated by a frame calculation task of the current frame; the M frame memory allocators are memory allocators reused cyclically on a per-frame basis; M is a positive integer greater than or equal to 2;
acquiring, through the current frame memory allocator, a target memory block corresponding to the current frame from M memory block lists of a preset memory pool; the M memory block lists correspond one-to-one to the M frame memory allocators; the address spaces of the M memory block lists do not overlap;
and acquiring memory resources from the target memory block and allocating the memory resources to the frame calculation task.
An embodiment of the present application provides a memory management device, including:
the distributor determining module is used for determining a current frame memory distributor corresponding to a current frame from preset M frame memory distributors according to a memory request initiated by a frame calculation task of the current frame; the M frame memory distributors are memory distributors circularly used by taking frames as units; m is a positive integer greater than or equal to 2;
a memory obtaining module, configured to obtain, by the current frame memory distributor, a target memory block corresponding to the current frame from M memory block lists in a preset memory pool; the M memory block lists are in one-to-one correspondence with the M frame memory allocators; address spaces among the M memory block lists are not coincident;
and the memory allocation module is used for acquiring memory resources from the target memory block and allocating the memory resources to the frame calculation task.
In the above apparatus, each frame memory allocator of the M frame memory allocators comprises: at least one sub memory allocator corresponding to the at least one frame state; the memory obtaining module is further configured to determine a target sub memory distributor from at least one sub memory distributor corresponding to the current frame memory distributor according to the current frame state of the current frame; the current frame state belongs to the at least one frame state; the current frame state represents the service type of the frame calculation task; and acquiring the target memory block from the preset memory pool through the target sub-memory distributor.
In the above apparatus, each of the at least one sub memory allocator includes as many sub-allocators as there are threads, and each sub-allocator corresponds one-to-one to a thread of the frame calculation task; the number of threads is the number of threads contained in the frame calculation task. The memory allocation module is further configured to synchronously acquire, through each sub-allocator in the target sub memory allocator, the memory resource corresponding to each thread in the frame calculation task from the target memory block, and allocate it to the corresponding thread, so as to complete memory allocation for the frame calculation task.
In the above apparatus, the memory obtaining module is further configured to determine, by the current frame memory distributor, whether a current frame state of the current frame matches a memory life cycle applied by the memory request before obtaining a target memory block corresponding to the current frame from M memory block lists in a preset memory pool; and under the condition that the frame state is not matched with the memory life cycle, not executing memory allocation and carrying out error prompt.
In the above apparatus, the address spaces of the M memory block lists are virtual address spaces. The memory obtaining module is further configured to determine, according to the amount of memory resources requested by the memory request, whether a free memory block in the memory block list corresponding to the current frame satisfies that amount; if not, apply to the operating system of the electronic device for virtual memory through the preset memory pool, and add a free memory block to the memory block list corresponding to the current frame according to the virtual memory obtained; if so, take the free memory block as the target memory block. The memory allocation module is further configured to submit the virtual addresses in the target memory block to the operating system, so that the operating system allocates physically addressed memory resources to the frame calculation task according to the virtual addresses.
In the above apparatus, the memory allocation module is further configured to, after acquiring the memory resources from the target memory block and allocating them to the frame calculation task: when an end instruction for the current frame state is received, update the current frame state through the current frame memory allocator and notify the target sub memory allocator to reset its memory allocation information; reset, through the target sub memory allocator, the memory allocation information of the memory block list corresponding to the current frame and invalidate the allocated virtual addresses corresponding to the frame calculation task; and, when the current frame state indicates that a complete frame calculation has finished, return the invalidated virtual addresses to the preset memory pool.
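The reserve, commit, and invalidate behaviour described for the apparatus above can be illustrated with a short sketch. The code below is not taken from the patent; it is a minimal POSIX example (mmap/mprotect/madvise) of how reserving a virtual address range for a memory block list, committing it when a block is handed to a frame calculation task, and invalidating it at the end of a frame state could be realized. A Windows build would typically use VirtualAlloc/VirtualFree with MEM_RESERVE and MEM_COMMIT for the same purpose. All names and sizes are illustrative.

```cpp
#include <sys/mman.h>
#include <cstddef>
#include <cstdio>

int main() {
    const std::size_t size = 1 << 20;  // one illustrative memory block

    // Reserve a virtual address range for the block list without backing it
    // with physical pages (PROT_NONE: any access faults immediately).
    void* base = mmap(nullptr, size, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED) { std::perror("mmap"); return 1; }

    // "Commit" the range when the block becomes the target memory block: the
    // operating system attaches physical pages on first touch, which matches
    // submitting the virtual address so the task receives physically backed memory.
    mprotect(base, size, PROT_READ | PROT_WRITE);

    // ... the frame calculation task uses [base, base + size) here ...

    // Invalidate at the end of the frame state: drop the physical pages and
    // revoke access, so a later frame that wrongly touches this address faults
    // at once instead of silently reading stale data.
    madvise(base, size, MADV_DONTNEED);
    mprotect(base, size, PROT_NONE);

    // Return the reserved range once the complete frame calculation has finished.
    munmap(base, size);
    return 0;
}
```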
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the memory management method provided by the embodiment of the application when executing the executable instructions stored in the memory.
An embodiment of the present application provides a computer-readable storage medium, which stores executable instructions for causing a processor to execute the computer-readable storage medium to implement the memory management method provided in the embodiment of the present application.
The present application provides a computer program product, which includes a computer program or an instruction, and when the computer program or the instruction is executed by a processor, the memory management method provided in the present application is implemented.
The embodiment of the application has the following beneficial effects:
In a multi-frame parallel computing scenario, a frame memory allocator that works on a per-frame basis performs memory allocation for each frame. As a result, when the frame calculation task of the current frame erroneously accesses a memory address of a previous frame that has already been invalidated, the operating system can detect the problem in time and report and locate the error quickly, which improves the stability of memory management. In addition, each independent frame memory allocator allocates memory from a memory block list whose address space does not overlap with the others, which guarantees that memory addresses allocated to different frames never coincide during multi-frame parallel computing and provides an independent allocation starting point for each frame. This reduces the impact on the running memory environment of other frames when one frame resets its allocation starting point and reallocates memory, enables support for multi-frame parallel computing, and improves the efficiency of program execution.
Drawings
FIG. 1 is a diagram illustrating a memory management structure for linear memory allocation according to the related art;
FIG. 2 is a schematic diagram illustrating a related art memory allocation process;
fig. 3A is an alternative structural diagram of a memory management frame computing system architecture according to an embodiment of the present application;
fig. 3B is an alternative structural diagram of a frame calculation memory management system architecture according to an embodiment of the present disclosure;
FIG. 4 is an alternative schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 5 is an optional flowchart of a memory management method according to an embodiment of the present disclosure;
fig. 6 is an optional flowchart of a memory management method according to an embodiment of the present disclosure;
FIG. 7 is a diagram illustrating an alternative process of a multi-frame multithreading parallel frame calculation mode according to an embodiment of the present application;
fig. 8 is an alternative flowchart of a memory management method according to an embodiment of the present application;
fig. 9 is an alternative flowchart of a memory management method according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an alternative memory management apparatus included in an electronic device according to an embodiment of the present application;
fig. 11 is an optional flowchart of a memory management method according to an embodiment of the present application;
fig. 12 is an alternative flowchart illustrating a memory management method according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an alternative memory management apparatus included in an electronic device according to an embodiment of the present application;
fig. 14 is an alternative flow chart illustrating that the memory management method provided in the embodiment of the present application is applied to an actual game scene;
fig. 15 is an optional flowchart illustrating that the memory management method provided in the embodiment of the present application is applied to an actual game scene.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application will be described in further detail below with reference to the attached drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first", "second", and "third" are only used to distinguish similar objects and do not denote a particular order; it should be understood that "first", "second", and "third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the application described herein can be practiced in an order other than that shown or described herein.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Frame: one image frame is refreshed on one page when the game runs.
2) Game frame (Game): the portion of a frame that handles player interaction, computing game logic.
3) Render frame (Render): a portion of a frame where a game object is rendered on a screen.
4) Frame state: in which computational state the markup frame is, i.e., in the game frame or the rendering frame.
5) A memory distributor: a facility for allocating and managing memory requested from the operating system.
6) Frame memory allocator: and allocating the memory for each frame, uniformly recovering the memory when the frame is ended, and reallocating the memory for the next frame.
7) Augmented Reality (AR) is a technology that calculates the position and angle of a camera image in real time and adds corresponding imagery; it is a new technology that seamlessly integrates real-world information with virtual-world information, with the goal of overlaying the virtual world onto the real world on a screen and enabling interaction between them.
8) Virtual Reality (VR) covers computer, electronic-information, and simulation technologies; its basic implementation is that a computer simulates a virtual environment to give a person a sense of immersion in that environment. Virtual reality technology is a computer simulation system that can create and let users experience a virtual world; it uses data from real life and combines electronic signals generated by computer technology with various output devices to convert them into a simulated environment in which the user can be immersed. The simulated environment may include images of real objects as well as virtual objects represented by three-dimensional models.
9) The Metaverse is a new type of internet application and social form that fuses the virtual and the real and is produced by integrating a number of new technologies. It provides immersive experience based on augmented reality technology, generates a mirror image of the real world based on digital twin technology, and builds an economic system based on blockchain technology; it closely fuses the virtual world and the real world in their economic, social, and identity systems, and allows every user to produce content and edit the world.
10) Cloud gaming, also known as gaming on demand, is an online gaming technology based on cloud computing. Cloud gaming technology enables thin-client devices with relatively limited graphics processing and data computing capabilities to run high-quality games. In a cloud gaming scenario, the game does not run on the player's game terminal but on a cloud server; the cloud server renders the game scene into a video and audio stream that is transmitted to the player's game terminal over the network. The player's game terminal does not need strong graphics and data processing capabilities; it only needs basic streaming-media playback capability and the ability to acquire player input instructions and send them to the cloud server.
At present, the structure of the allocation system of the frame memory allocator in the related art is shown in fig. 1: the allocator manages a memory block list and allocates from it linearly to the requesting object. After memory is allocated, the allocator advances the allocation pointer by the size of the allocated memory, and dynamically applies to the operating system for a new memory block when the capacity of the memory block list is insufficient. When one frame of calculation finishes, the allocator resets the allocation pointer to the starting point of the head memory block of the list, and allocation continues for the next frame. The allocation flow of the related art is shown in fig. 2.
In fig. 2, when the memory allocator receives a memory allocation request for calculating the current frame, it determines the allocation starting point according to the current position of the allocation pointer and checks whether the remaining memory of the block containing the allocation starting point satisfies the requested size. If the remaining memory satisfies the requested size, memory is allocated at the address of the allocation starting point and the starting point is advanced. If the remaining memory does not satisfy the requested size, the allocator checks whether another memory block is chained after the block containing the allocation starting point; if so, memory is allocated there and the allocation starting point moves to the start of that next block; otherwise, a new memory block is requested from the operating system and linked to the tail of the block containing the allocation starting point, after which memory allocation and starting-point advancement are performed. When a frame end instruction is received, the currently allocated memory resources are released and the allocation starting point is reset to the head of the memory block chain.
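For illustration only, the related-art scheme of fig. 1 and fig. 2 can be sketched as a single bump allocator shared by all frames; the class and member names below are assumptions, not taken from the patent.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <iterator>
#include <list>
#include <vector>

// Related-art scheme: one global linear allocator with one memory block list
// and one allocation pointer, shared by every frame.
class LinearFrameAllocator {
    struct Block { std::vector<std::uint8_t> storage; std::size_t used = 0; };
    std::list<Block> blocks_;            // the memory block list
    std::list<Block>::iterator cursor_;  // block holding the allocation starting point
    std::size_t blockSize_;
public:
    explicit LinearFrameAllocator(std::size_t blockSize) : blockSize_(blockSize) {
        blocks_.push_back(Block{std::vector<std::uint8_t>(blockSize_), 0});
        cursor_ = blocks_.begin();
    }
    void* allocate(std::size_t size) {
        // Walk the chained block list until a block with enough room is found,
        // appending a new block from the OS (here: the heap) when the list is full.
        while (cursor_->used + size > cursor_->storage.size()) {
            auto next = std::next(cursor_);
            if (next == blocks_.end()) {
                blocks_.push_back(Block{std::vector<std::uint8_t>(std::max(blockSize_, size)), 0});
                next = std::prev(blocks_.end());
            }
            cursor_ = next;
        }
        void* p = cursor_->storage.data() + cursor_->used;
        cursor_->used += size;  // advance the allocation pointer by the allocated size
        return p;
    }
    // Frame end: reset the allocation starting point to the head memory block.
    // In multi-frame parallel computing this is exactly the step that corrupts
    // the running memory environment of frames that have not finished yet.
    void resetFrame() {
        for (auto& b : blocks_) b.used = 0;
        cursor_ = blocks_.begin();
    }
};
```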
It can be seen that in the related art the allocator manages a globally unique memory block list: all threads share this memory and every allocation operation must take a lock, which reduces multithreading efficiency and therefore frame calculation efficiency. The allocator also manages a globally unique allocation starting point and resets it when one frame ends. In multi-frame parallel computing, other frames may still be running their calculations when one frame ends, and resetting the allocation starting point while continuing to allocate memory can corrupt the running memory environment of those other frames. The related-art scheme therefore cannot support multi-frame parallel computing: it must wait for one frame to finish completely before the next frame can be computed, cannot fully exploit the parallel computing capability of modern processors, and thus reduces frame calculation efficiency. Furthermore, in the related art the memory addresses allocated in the previous frame are still valid in the next frame; if the next frame erroneously accesses memory requested in the previous frame, no access-violation system error occurs even though the contents of that memory may be invalid, so such development errors are hard to discover, which reduces program stability.
Embodiments of the present application provide a memory management method, apparatus, device, computer program, and storage medium, which can improve efficiency of frame calculation and program stability. The following describes an exemplary application of the electronic device provided in the embodiment of the present application, and the electronic device provided in the embodiment of the present application may be implemented as various types of terminals or user terminals such as a smart phone, a smart watch, a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable game device), an intelligent voice interaction device, an intelligent home appliance, and a vehicle-mounted terminal, and may also be implemented as a server. In the following, an exemplary application will be explained when the electronic device is implemented as a server.
Referring to fig. 3A, fig. 3A is an alternative architecture diagram of the frame computing system 100 provided in the embodiment of the present application, the terminal 400 is connected to the server 200 through the network 300, and the network 300 may be a wide area network or a local area network, or a combination of both.
The terminal 400 runs an application 410 related to frame calculation, such as a game application, a video interaction application, and the like, taking the game application as an example, when the terminal 400 refreshes a game screen through a game engine during the running process of the game application 410, a frame calculation task may be generated according to frame data of a current frame, such as user operation data and game screen rendering data, and the frame calculation task is submitted to the server 200, and frame calculation is performed by using a calculation resource of the server 200.
The server 200 is configured to determine, according to a frame calculation task corresponding to a current frame, a current frame memory allocator corresponding to the current frame from M preset frame memory allocators; the M frame memory allocators are memory allocators reused cyclically on a per-frame basis; M is a positive integer greater than or equal to 2; acquire, through the current frame memory allocator, a target memory block corresponding to the current frame from M memory block lists of a preset memory pool; the M memory block lists correspond one-to-one to the M frame memory allocators; the address spaces of the M memory block lists do not overlap; and acquire memory resources from the target memory block and allocate them to the frame calculation task.
The server 200 is further configured to execute a frame calculation task by using the memory resource, so as to obtain a frame calculation result. The frame calculation result is transmitted to the terminal 400 through the network 300 and displayed on the interface of the application 410 of the terminal 400. In some embodiments, the frame calculation result may be a game screen response generated from the user operation data, and a corresponding frame screen display refresh is performed in the game application 410 of the terminal 400.
In some embodiments, when the electronic device is implemented as a server, the electronic device may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers. In some embodiments, the embodiments of the present application may be implemented by technologies such as AR and VR, and the application 410 running on the terminal 400 may be an application or a client of the AR or VR; in some embodiments, the embodiments of the present application may be implemented by means of cloud technology; the application 410 running on the terminal 400 may be a cloud gaming application based on cloud technology; accordingly, the server 200 may also be a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, web services, cloud communication, middleware services, domain name services, security services, CDNs, and big data and artificial intelligence platforms. When the electronic device is implemented as a terminal, the electronic device may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a smart voice interaction device, a smart home appliance, a smart traffic and a vehicle-mounted terminal, but is not limited thereto.
The embodiment of the present application can also be implemented by using a block chain technique, referring to fig. 3B, where fig. 3B is a schematic structural diagram of the frame computing system 100 provided in the embodiment of the present application. In fig. 3B, the frame calculation server may be a background server corresponding to the game application, and frame data submitted by the game application on the terminal 400 may be calculated by a plurality of frame calculation servers (frame calculation servers 601 and 602 are exemplarily shown in fig. 3B).
In some embodiments, the frame computation server and the terminal may join the blockchain network 500 as one of the nodes. The type of blockchain network 500 is flexible and may be, for example, any of a public chain, a private chain, or a federation chain. Illustratively, the frame computation server 601 is mapped to a consensus node 500-1 in the blockchain network 500 and the frame computation server 602 is mapped to a consensus node 500-2. The frame calculation server 601 and the frame calculation server 602 may perform frame calculation on frame data sent by the terminal 400 by performing an intelligent contract, and send the frame calculation results to the blockchain network 500 for consensus, respectively. When the consensus passes, the frame calculation result is transmitted to the terminal 400, so that the game application 410 on the terminal 400 refreshes the game screen according to the frame calculation result. Therefore, the frame calculation results are subjected to consensus confirmation through a plurality of nodes in the block chain network, the influence of individual server error calculation is avoided through a consensus mechanism, and the accuracy of frame calculation is further improved.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a server 200 according to an embodiment of the present application, where the server 200 shown in fig. 4 includes: at least one processor 210, memory 250, at least one network interface 220, and a user interface 230. The various components in server 200 are coupled together by a bus system 240. It is understood that the bus system 240 is used to enable communications among the components. The bus system 240 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 240 in fig. 4.
The Processor 210 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 230 includes one or more output devices 231, including one or more speakers and/or one or more visual display screens, that enable the presentation of media content. The user interface 230 also includes one or more input devices 232, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 250 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 250 optionally includes one or more storage devices physically located remotely from processor 210.
The memory 250 includes volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 250 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 250 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 251 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 252 for communicating with other computing devices via one or more (wired or wireless) network interfaces 220, exemplary network interfaces 220 including: Bluetooth, Wireless Fidelity (Wi-Fi), Universal Serial Bus (USB), etc.;
a presentation module 253 to enable presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 231 (e.g., a display screen, speakers, etc.) associated with the user interface 230;
an input processing module 254 for detecting one or more user inputs or interactions from one of the one or more input devices 232 and translating the detected inputs or interactions.
In some embodiments, the memory management device provided in this embodiment of the present application may be implemented in software, and fig. 4 illustrates the memory management device 255 stored in the storage 250, which may be software in the form of programs and plug-ins, and includes the following software modules: allocator determination module 2551, memory fetch module 2552, and memory allocation module 2553, which are logical and thus may be arbitrarily combined or further divided depending on the functionality implemented.
The functions of the respective modules will be explained below.
In other embodiments, the apparatus provided in the embodiments of the present Application may be implemented in hardware, and for example, the apparatus provided in the embodiments of the present Application may be a processor in the form of a hardware decoding processor, which is programmed to execute the memory management method provided in the embodiments of the present Application, for example, the processor in the form of the hardware decoding processor may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
In some embodiments, the terminal or the server may implement the memory management method provided in the embodiments of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; the Application program may be a local (Native) Application program (APP), that is, a program that needs to be installed in an operating system to be executed, such as a social Application APP or a message sharing APP; or may be an applet, i.e. a program that can be run only by downloading it to the browser environment; but also an applet or web client that can be embedded in any APP. In general, the computer programs described above may be any form of application, module or plug-in.
The memory management method provided by the embodiment of the present application will be described with reference to exemplary applications and implementations of the electronic device provided by the embodiment of the present application.
Referring to fig. 5, fig. 5 is an optional flowchart of a memory management method according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 5.
S101, determining a current frame memory allocator corresponding to a current frame from M preset frame memory allocators according to a frame calculation task corresponding to the current frame; the M frame memory allocators are memory allocators reused cyclically on a per-frame basis; M is a positive integer greater than or equal to 2.
The memory management method provided by the embodiment of the present application may be applied to a scene of providing memory allocation for an application program or service related to frame calculation, for example, a scene of providing memory allocation management for frame calculation for a game application, a VR application, an AR application, or applications such as a video production application, a video interaction application, a video editing application, or the like, or a product or service in a meta-universe application scene, and is specifically selected according to an actual situation, and the embodiment of the present application is not limited.
In the embodiment of the present application, an application or service related to frame calculation is run on the electronic device, and when the application or service executes frame calculation, for example, when a game performs page refresh in a running process, it is necessary to apply for a corresponding memory resource to the electronic device to perform frame calculation according to a frame calculation task corresponding to a current frame.
In the embodiment of the present application, the frame calculation task may be a calculation task corresponding to a complete frame of data of the current frame, or the calculation process of the current frame may be divided into a plurality of frame calculation tasks to be executed, for example, the calculation of the current frame is completed by the frame calculation tasks of different calculation services. For example, for a game scene, a complete frame of frame computation may be divided into computing an interactive logic part in a frame and computing an image rendering part in a frame, and the computation of the complete frame may be completed by performing a corresponding frame computation task, respectively. The specific selection is performed according to actual conditions, and the embodiments of the present application are not limited.
In the embodiment of the application, M preset memory allocators are deployed in advance on the electronic device. The M frame memory allocators work on a per-frame basis, and each allocator provides memory allocation for the frame calculation of one frame. Therefore, in multi-frame parallel computing, because different frames are given independent memory allocators, a problem can be discovered and located in time when the frame calculation task of the next frame erroneously accesses memory that the previous frame has already invalidated; this avoids the difficulty of troubleshooting erroneous memory accesses caused by repeated addresses between frames. In actual development, erroneous access to a frame's memory usually happens in the very next frame, and the probability of accessing memory invalidated more than M frames earlier is extremely low; the M per-frame memory allocators of the embodiment of the application therefore allow erroneous address accesses to be located quickly, so frame-memory access errors can be found during development rather than becoming errors that are hard to troubleshoot in game testing or after release, which improves development efficiency and quality as well as the stability of memory management.
Because the memory address space is limited, the M frame memory allocators are reused cyclically, which guarantees that memory addresses allocated within any M consecutive frames are never repeated.
In the embodiment of the application, M is a positive integer greater than or equal to 2. In some embodiments, M may take a value of 6, or may take a value of another value, which is specifically selected according to an actual situation, and the embodiment of the present application is not limited.
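As an illustrative sketch (the class names and the fixed value M = 6 are assumptions consistent with the examples given later, not a definitive implementation), selecting the current frame memory allocator from the M cyclically reused allocators can be as simple as indexing by the frame counter modulo M:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

constexpr std::size_t M = 6;  // one possible value of M, as suggested above

struct FrameMemoryAllocator {
    std::size_t id = 0;  // F0..F5 in the later example
    // per-frame allocation state (memory block list, frame state, ...) lives here
};

class FrameMemoryManager {
    std::array<FrameMemoryAllocator, M> allocators_;
public:
    FrameMemoryManager() {
        for (std::size_t i = 0; i < M; ++i) allocators_[i].id = i;
    }
    // Cyclic reuse: frame N and frame N+M map to the same allocator, so the at
    // most M frames in flight each have their own, independently managed allocator.
    FrameMemoryAllocator& currentFrameAllocator(std::uint64_t frameIndex) {
        return allocators_[frameIndex % M];
    }
};
```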
S102, acquiring a target memory block corresponding to a current frame from M memory block lists of a preset memory pool through a current frame memory distributor; m memory block lists correspond to M frame memory distributors one by one; the address spaces between the M memory block lists do not coincide.
In the embodiment of the application, a preset memory pool is deployed on an electronic device, and memory resources in the preset memory pool are maintained in the form of M memory block lists, where the M memory block lists are in one-to-one correspondence with M memory distributors, and a memory address space included in each memory block list is not overlapped with address spaces in other memory block lists.
In the embodiment of the present application, for a memory request of a current frame, an electronic device determines, in a preset memory pool, a memory block list of an independent address space corresponding to a current frame memory distributor through a frame memory distributor corresponding to the current frame, as a memory block list corresponding to the current frame. And then, the allocable memory block resources are obtained from the memory block list corresponding to the current frame, and are used as the target memory block corresponding to the current frame.
It can be seen that the memory block lists with independent address spaces that correspond to each frame memory allocator effectively separate the memory address spaces of different frame memory allocators, which prevents memory addresses from being allocated repeatedly. In addition, because each independent frame memory allocator allocates memory within a memory block list with an independent address space, each frame has an independent allocation starting point during multi-frame parallel computing; the resetting of the allocation starting point and the reallocation of memory when one frame finishes no longer affect the running memory environment of other frames, which enables support for multi-frame parallel computing.
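One possible way to keep the address spaces of the M memory block lists disjoint is sketched below, under the assumption that the preset memory pool reserves one contiguous slice of address space per list (in practice this would be a virtual range reserved from the operating system); all identifiers are illustrative.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct MemoryBlock {
    std::uintptr_t base;  // start address inside the owning list's reserved slice
    std::size_t    size;
    std::size_t    used;
};

// One memory block list per frame memory allocator; the reserved ranges of
// different lists never overlap, so addresses handed out for different frames
// can never collide.
struct MemoryBlockList {
    std::uintptr_t rangeBegin = 0;
    std::uintptr_t rangeEnd = 0;
    std::vector<MemoryBlock> blocks;
};

class PresetMemoryPool {
    std::vector<MemoryBlockList> lists_;  // size M, one per frame memory allocator
public:
    PresetMemoryPool(std::size_t m, std::uintptr_t poolBase, std::size_t perListBytes) {
        for (std::size_t i = 0; i < m; ++i) {
            MemoryBlockList list;
            list.rangeBegin = poolBase + i * perListBytes;  // disjoint slices
            list.rangeEnd   = list.rangeBegin + perListBytes;
            lists_.push_back(list);
        }
    }
    MemoryBlockList& listForAllocator(std::size_t allocatorId) { return lists_[allocatorId]; }

    // Carve the next block out of a list's reserved slice; returns nullptr
    // when the slice is exhausted and more address space must be reserved.
    MemoryBlock* addBlock(MemoryBlockList& list, std::size_t blockSize) {
        std::uintptr_t next = list.blocks.empty()
            ? list.rangeBegin
            : list.blocks.back().base + list.blocks.back().size;
        if (next + blockSize > list.rangeEnd) return nullptr;
        list.blocks.push_back(MemoryBlock{next, blockSize, 0});
        return &list.blocks.back();
    }
};
```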
S103, acquiring memory resources from the target memory block and distributing the memory resources to the frame calculation task.
In the embodiment of the application, the electronic device acquires the memory resource with the corresponding size from the target memory block according to the size of the memory resource applied by the frame calculation task of the current frame, and allocates the memory resource with the corresponding size to the frame calculation task, so that memory allocation of the current frame is completed.
In this embodiment, the electronic device may implement memory allocation of a frame calculation task for each frame in an application in the same execution process as the memory allocation for the current frame. And will not be described in detail herein.
It can be understood that, in a multi-frame parallel computing scenario, a frame memory allocator that works on a per-frame basis performs memory allocation for each frame, so when the frame calculation task of the current frame erroneously accesses a memory address that has already been invalidated in a previous frame, the operating system can detect the problem in time and report and locate the error quickly, which improves the stability of memory management. Moreover, because each independent frame memory allocator allocates memory within a memory block list with an independent address space, memory addresses allocated to different frames never coincide during multi-frame parallel computing, each frame has its own allocation starting point, the impact of one frame resetting its allocation starting point and reallocating memory on the running memory environment of other frames is reduced, multi-frame parallel computing is supported, and program execution efficiency is improved.
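Putting S101-S103 together, a simplified, single-threaded sketch of the allocation path might look as follows; the per-state and per-thread sub-allocators described later are omitted here, and all names and sizes are illustrative.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct MemoryBlock { std::vector<std::uint8_t> storage; std::size_t used = 0; };

struct FrameMemoryAllocator {
    std::vector<MemoryBlock> blockList;  // this allocator's slice of the memory pool

    // S102: find a free block that satisfies the request, growing the list if needed.
    MemoryBlock& acquireTargetBlock(std::size_t need) {
        for (auto& b : blockList)
            if (b.storage.size() - b.used >= need) return b;
        blockList.push_back(MemoryBlock{
            std::vector<std::uint8_t>(std::max<std::size_t>(std::size_t{1} << 20, need)), 0});
        return blockList.back();
    }

    // S103: hand a region of the target memory block to the frame calculation task.
    void* allocateForTask(std::size_t size) {
        MemoryBlock& target = acquireTargetBlock(size);
        void* p = target.storage.data() + target.used;
        target.used += size;
        return p;
    }
};

// S101-S103 combined: pick the current frame's allocator, then allocate from it.
void* allocateForFrameTask(std::vector<FrameMemoryAllocator>& allocators,
                           std::uint64_t frameIndex, std::size_t size) {
    FrameMemoryAllocator& current = allocators[frameIndex % allocators.size()];  // S101
    return current.allocateForTask(size);                                        // S102 + S103
}
```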
In some embodiments, based on fig. 5, as shown in fig. 6, before S102, S001-S002 may be further performed, which will be described in conjunction with the steps.
And S001, determining whether the current frame state of the current frame is matched with the memory life cycle applied by the memory request through the current frame memory distributor.
In the embodiment of the application, the current frame state represents the service type of the frame calculation task. When the electronic device executes frame calculation, the frame states of a frame in different calculation stages can be defined according to the service types corresponding to different stages of the frame calculation.
In some embodiments, taking a game scenario as an example, during the running of a game application, the frame calculation task for refreshing each frame of the game screen generally involves two frame calculation service types: computing the game logic response to the user's operations on the game interface, and rendering the picture corresponding to that response result. Thus, the frame state corresponding to the part of the frame calculation task that processes player interaction and computes game logic can be defined as a Game frame, and the frame state corresponding to the part in which game objects are rendered on the screen is defined as a Render frame. Of course, frame calculation services may also be divided in other ways, which are selected according to the actual situation; the embodiment of the present application is not limited in this respect.
In this embodiment, the electronic device may pre-define the type of the life cycle of the memory corresponding to each frame state according to the frame state. For example, for the case of dividing the game interaction frame of a complete frame into the game frame and the rendering frame, the memory life cycle of the game interaction frame of a complete frame may include: the memory life cycle corresponding to the game frame, the memory life cycle corresponding to the rendering frame, and the memory life cycle corresponding to the game interactive frame, i.e., the full frame. Correspondingly, for frame states defined in other manners, a corresponding memory life cycle may also be defined in a similar manner, specifically selected according to an actual situation, and the embodiment of the present application is not limited.
In the embodiment of the application, the electronic device can acquire the frame state of the current frame and the memory life cycle to be applied by analyzing the memory request of the current frame. The electronic equipment determines whether the frame state of the current frame is matched with the memory life cycle through the frame memory distributor corresponding to the current frame.
For example, the timing with which frame calculation tasks perform frame calculation may be as shown in fig. 7: the frame calculation task of the 1st game interaction frame performs the frame calculation whose frame state is the game frame at time t0 and the frame calculation whose frame state is the rendering frame at time t1; in parallel, the frame calculation task of the 2nd frame performs the game-frame calculation at time t1 and the rendering-frame calculation at time t2. The memory with game-frame life cycle applied for at time t0 therefore expires at time t1, and accessing it at time t2 is illegal, so the frame state is determined not to match the memory life cycle. Similarly, the memory with rendering-frame life cycle applied for by the 1st frame at time t1 expires at time t2, and accessing it at time t3 is illegal, so the frame state is determined not to match the memory life cycle. The memory with full-frame life cycle applied for by the 1st frame at time t0 expires at time t2, and accessing it at time t3 is illegal, so the frame state is determined not to match the memory life cycle.
And S002, under the condition that the frame state is not matched with the memory life cycle, not executing memory allocation and carrying out error prompt.
In the embodiment of the present application, under the condition that the frame status is not matched with the memory life cycle, it is described that the memory request illegally applies for the memory occupation cycle that is not matched with the frame status, and the electronic device does not execute memory allocation and performs an error notification.
It should be noted that, in this embodiment of the application, under the condition that the frame state of the current frame matches the memory life cycle, the electronic device may execute the method in S102, and obtain, according to the current frame memory allocator, the target memory block corresponding to the current frame from the M memory block lists.
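A minimal sketch of the S001/S002 check follows, assuming the game-frame, rendering-frame, and full-frame life cycles used in the examples above; the exact matching rules and names are assumptions for illustration only.

```cpp
#include <stdexcept>

enum class FrameState     { Game, Render };
enum class MemoryLifetime { GameFrame, RenderFrame, FullFrame };

// One plausible matching rule: a life cycle may only be requested while the
// frame is in the stage that the life cycle is tied to; full-frame memory is
// assumed requestable in either stage.
bool lifetimeMatchesState(FrameState state, MemoryLifetime lifetime) {
    switch (lifetime) {
        case MemoryLifetime::GameFrame:   return state == FrameState::Game;
        case MemoryLifetime::RenderFrame: return state == FrameState::Render;
        case MemoryLifetime::FullFrame:   return true;
    }
    return false;
}

// S001/S002: refuse the request and raise an error when the requested life
// cycle does not match the current frame state; otherwise fall through to S102.
void checkMemoryRequest(FrameState currentState, MemoryLifetime requested) {
    if (!lifetimeMatchesState(currentState, requested)) {
        throw std::runtime_error("memory request life cycle does not match current frame state");
    }
}
```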
It can be understood that, by subdividing the life cycle, the memory resource occupied by the frame calculation task can be recovered earlier, unnecessary memory occupation is reduced, and memory allocation is more reasonable, so that the memory utilization efficiency is improved, and the memory resource occupation of the program is reduced.
In some embodiments, each of the M frame memory allocators includes: at least one sub memory allocator corresponding to the at least one frame state. Based on fig. 5, as shown in fig. 8, S102 may be implemented by performing S1021-S1022, which will be described in conjunction with the respective steps.
S1021, according to the current frame state of the current frame, determining a target sub memory distributor from at least one sub memory distributor corresponding to the current frame memory distributor; the current frame state belongs to at least one frame state; the current frame state represents the service type of the frame calculation task.
In this embodiment, based on the divided frame states, an independent sub-memory allocator may be provided in each frame memory allocator for each frame state of at least one frame state, so as to implement memory allocation. In this way, the electronic device may determine, from the at least one sub-memory allocator corresponding to the current-frame-memory allocator, the sub-memory allocator corresponding to the current-frame state as the target sub-memory allocator according to the current-frame state of the current frame.
Illustratively, the at least one frame state may include the game frame, the rendering frame and the full frame, and the at least one sub-memory allocator may include: a game frame sub memory allocator, a render frame sub memory allocator, and a full frame sub memory allocator. And under the condition that the current frame state is the game frame, the electronic equipment determines that the game frame sub memory distributor in the current frame memory distributor is the target sub memory distributor, and the other similar operations are carried out.
And S1022, acquiring the target memory block from the preset memory pool through the target sub-memory distributor.
In this embodiment of the present application, the electronic device obtains, from a memory block list corresponding to a current frame memory allocator, allocable memory block resources as a target memory block through a target sub-memory allocator in the current frame memory allocator.
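A sketch of S1021/S1022, under the assumption that each frame memory allocator simply indexes an array of sub memory allocators by frame state; the names are illustrative and the block acquisition is left as a stub.

```cpp
#include <array>
#include <cstddef>

// Frame states as defined earlier in the text: game frame and render frame.
enum class FrameState : std::size_t { Game = 0, Render = 1, Count = 2 };

struct SubMemoryAllocator {
    // Pulls a target memory block from the preset memory pool for its state
    // (implementation omitted in this sketch).
    void* acquireTargetBlock(std::size_t /*size*/) { return nullptr; }
};

struct FrameMemoryAllocator {
    // One sub memory allocator per frame state.
    std::array<SubMemoryAllocator, static_cast<std::size_t>(FrameState::Count)> subAllocators;

    // S1021: pick the sub memory allocator that matches the current frame state;
    // S1022 then acquires the target memory block through it.
    SubMemoryAllocator& targetSubAllocator(FrameState currentState) {
        return subAllocators[static_cast<std::size_t>(currentState)];
    }
};
```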
In some embodiments, based on the above division of frame computation tasks according to frame status, the electronic device may implement frame computation in a parallel mode of multiple frames and multiple threads. Taking the frame state of the Game scene including the Game frame, the rendering frame and the full frame as an example, referring to fig. 7, the electronic device may respectively perform the calculation on the Game (Game) frame and the rendering (Render) frame through independent threads running in parallel, that is, the Game main thread and the Render main thread shown in fig. 7. At the same time, the frame calculation of the rendering frame of the ith frame and the game frame of the (i + 1) th frame can be executed through two parallel threads on the electronic device. Illustratively, as at time t1, frame calculations for the 2 nd frame of the Game frame are performed in the Game main thread, while frame calculations for the 1 st frame of the rendered frame are performed in the parallel Render main thread. Therefore, the utilization rate of the processor can be increased, and the efficiency of game operation is improved.
It should be noted that the Game main thread and the Render main thread shown in fig. 7 are an exemplary case of thread division in frame calculation, and in practical application, the thread division may be performed according to different requirements, specifically, the thread division is performed according to the actual situation, and the embodiment of the present application is not limited.
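For illustration, the pipelined mode of fig. 7 can be sketched with two threads: the game thread hands a finished frame index to the render thread and immediately starts the next frame, so frame i's render stage and frame i+1's game stage run in parallel. The synchronization below is deliberately minimal and, unlike a real engine, does not bound the pipeline depth to M.

```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <thread>

int main() {
    std::queue<std::uint64_t> readyForRender;
    std::mutex m;
    std::condition_variable cv;
    const std::uint64_t totalFrames = 8;

    // Render main thread: consumes frames one stage behind the game thread.
    std::thread renderThread([&] {
        for (std::uint64_t done = 0; done < totalFrames; ++done) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return !readyForRender.empty(); });
            std::uint64_t frame = readyForRender.front();
            readyForRender.pop();
            lk.unlock();
            // Render-stage computation of `frame` would allocate through the
            // render-frame sub memory allocator of allocator (frame % M) here.
        }
    });

    // Game main thread: computes the game stage of each frame, then hands it off.
    for (std::uint64_t frame = 0; frame < totalFrames; ++frame) {
        // Game-stage computation of `frame` would allocate through the
        // game-frame sub memory allocator of allocator (frame % M) here.
        {
            std::lock_guard<std::mutex> lk(m);
            readyForRender.push(frame);
        }
        cv.notify_one();
    }
    renderThread.join();
    return 0;
}
```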
In some embodiments, corresponding to the multithreaded parallel mode, each of the sub memory allocators described above may contain as many sub-allocators as there are threads, where the number of threads is the number of threads contained in the frame calculation task; it may be, for example, the maximum number of threads supported by the electronic device or the number of threads currently used by the frame calculation task, and is set according to the actual situation. Among these sub-allocators, each sub-allocator corresponds one-to-one to a thread of the frame calculation task. Based on fig. 8, as shown in fig. 9, S1022 in fig. 8 can be implemented by executing S201 as follows:
S201, synchronously acquiring memory resources corresponding to each thread in the frame calculation task from the target memory block through each sub-allocator in the target sub memory allocator, and allocating the memory resources to the corresponding threads to complete memory allocation for the frame calculation task.
In the embodiment of the present application, each sub memory allocator belongs to one frame memory allocator and corresponds to one frame state, that is, one memory life cycle, and the per-thread sub-allocators it contains correspond one-to-one to the threads of the frame calculation task; it can therefore be understood that every sub-allocator is a memory allocator that is independent per frame, per life cycle, and per thread. For the current frame, the electronic device synchronously obtains, through each sub-allocator in the target sub memory allocator, the memory resource corresponding to each thread of the current frame's frame calculation task from the target memory block, and allocates the corresponding memory resources to each thread through that thread's sub-allocator, thereby completing memory allocation for the frame calculation task.
In this embodiment of the present application, each memory block in each memory block list is a memory block with a fixed size, and the electronic device may obtain, through the target sub-memory allocator, a target memory block with a fixed size from the memory block list corresponding to the current frame, and then, through the sub-allocators with the thread number, perform, in the target memory block, memory allocation for each thread in the frame computation task according to different memory resource numbers required by each thread.
In some embodiments, the sub-allocator may be a linear memory allocator or another type of memory allocator, chosen according to the actual situation; the embodiments of the present application are not limited in this respect.
It can be understood that, because the sub-allocators are frame-independent, life-cycle-independent and thread-independent memory allocators, performing memory allocation through one sub-allocator per thread yields thread-independent memory allocation, which achieves efficient lock-free thread safety and thereby improves the efficiency of program operation.
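As an illustration of the per-thread, linear sub-allocators discussed above, the following C++ sketch gives each worker thread its own bump-pointer allocator over a private slice of the fixed-size target memory block, so allocations need no lock. The class names, the equal partitioning of the block and the alignment handling are assumptions for the sketch, not details taken from this application.

```cpp
// Linear (bump-pointer) sub-allocator with one instance per thread, so
// allocations never contend and need no lock. Sizes, alignment and the
// per-thread partitioning of the block are illustrative assumptions.
#include <cstddef>
#include <cstdint>
#include <vector>

class LinearSubAllocator {
public:
    void Reset(std::uint8_t* base, std::size_t capacity) {
        base_ = base; capacity_ = capacity; offset_ = 0;
    }
    void* Allocate(std::size_t size, std::size_t align = alignof(std::max_align_t)) {
        std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
        if (aligned + size > capacity_) return nullptr;   // caller must fetch a new block
        offset_ = aligned + size;
        return base_ + aligned;
    }
    void ResetAllocationInfo() { offset_ = 0; }            // "reset memory allocation information"
private:
    std::uint8_t* base_ = nullptr;
    std::size_t capacity_ = 0;
    std::size_t offset_ = 0;                               // allocation starting point
};

// One sub-allocator per worker thread of the frame calculation task.
struct PerThreadSubAllocators {
    explicit PerThreadSubAllocators(std::size_t thread_count)
        : allocators(thread_count) {}
    // Each thread carves its own slice of the fixed-size target memory block.
    void AttachBlock(std::uint8_t* block, std::size_t block_size) {
        std::size_t slice = block_size / allocators.size();
        for (std::size_t t = 0; t < allocators.size(); ++t)
            allocators[t].Reset(block + t * slice, slice);
    }
    void* Allocate(std::size_t thread_index, std::size_t size) {
        return allocators[thread_index].Allocate(size);    // lock-free: private allocator
    }
    std::vector<LinearSubAllocator> allocators;
};
```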
In some embodiments, as shown in fig. 10, the M frame memory allocators may be 6 frame memory allocators used cyclically (frame memory allocators F0 to F5 are shown as an illustration). The 6 frame memory allocators perform the memory allocation for each frame calculation, ensuring that memory addresses allocated across the 6 frames do not repeat. Each frame memory allocator holds its own independent ID, for example F0 to F5, and records the frame state of the corresponding frame. The preset memory pool contains 6 memory block lists corresponding to the frame memory allocators F0 to F5. In fig. 10, the 6 frame memory allocators share one static, thread-independent sub memory allocator matrix; the sub memory allocators corresponding to each frame memory allocator in the matrix correspond to the memory life cycles, and each sub memory allocator contains one sub-allocator per thread. Fig. 10 shows a sub memory allocator matrix containing a Game frame sub memory allocator, a Render frame sub memory allocator and a full frame sub memory allocator. When allocating memory for the frame calculation task of the current frame, the electronic device determines the ID of the current frame memory allocator corresponding to the current frame, obtains the corresponding target sub memory allocator (such as a Game sub memory allocator) from the sub memory allocator matrix according to the ID of the current frame memory allocator (such as F2) and the memory life cycle applied for by the frame calculation task, and then accesses the preset memory pool through the per-thread sub-allocators of the target sub memory allocator, obtaining memory block resources for allocation from the memory block list corresponding to the current frame memory allocator.
In some embodiments, as the internal mechanism by which the frame memory allocator achieves thread-safe allocation, the actual memory allocation is performed by the sub-allocator corresponding to each thread inside the frame memory allocator, which ensures that memory allocations of different threads do not conflict. When applying for memory, the frame calculation task only needs to submit a request to the frame memory allocator corresponding to the current frame and does not need to know the thread-safety implementation details inside that frame memory allocator.
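The lookup described above can be sketched as indexing a shared matrix by frame-allocator ID and requested memory life cycle. The following C++ fragment is only an illustrative reconstruction of fig. 10; the enum values, the value M = 6 and the SubAllocatorSet type are assumptions.

```cpp
// Looking up a target sub memory allocator from a shared matrix indexed by
// frame-allocator ID (F0..F5) and requested memory life cycle.
#include <array>
#include <cstddef>

enum class LifeCycle : std::size_t { Game = 0, Render = 1, Full = 2, Count = 3 };

struct SubAllocatorSet { /* one sub-allocator per worker thread */ };

constexpr std::size_t kFrameAllocatorCount = 6;   // F0..F5, cycled frame by frame

// Static, thread-independent matrix shared by all frame memory allocators.
using SubAllocatorMatrix =
    std::array<std::array<SubAllocatorSet,
                          static_cast<std::size_t>(LifeCycle::Count)>,
               kFrameAllocatorCount>;

SubAllocatorSet& GetTargetSubAllocator(SubAllocatorMatrix& matrix,
                                       std::size_t frame_allocator_id,  // e.g. 2 for F2
                                       LifeCycle requested) {
    return matrix[frame_allocator_id][static_cast<std::size_t>(requested)];
}

// Frame i maps onto a frame memory allocator cyclically, e.g. i % 6 -> F(i % 6).
std::size_t FrameAllocatorIdForFrame(std::size_t frame_index) {
    return frame_index % kFrameAllocatorCount;
}
```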
It can be seen that, when the memory allocation apparatus shown in fig. 10 is applied to the multi-frame, multi-thread parallel frame calculation mode, the addresses allocated within M frames cannot repeat, because the memory address spaces corresponding to the frame memory allocators do not coincide. Taking the frame calculation mode of fig. 7 as an example, after the 1st game frame in fig. 7 has been calculated, the 2nd game frame is run; at this point the frame memory allocator corresponding to the 1st game frame has already reset its memory allocation information and invalidated the address space corresponding to the 1st game frame, so if an address allocated in the 1st frame is erroneously used in the 2nd frame, the operating system will immediately throw an exception. The same applies to rendering frames.
It can be understood that, with the method of the embodiment of the present application, memory address allocation does not conflict in a multi-frame, multi-thread parallel computing scene, so the computing resources of multiple threads can be fully utilized and memory access errors can be discovered in time, which improves the stability of memory management and the efficiency of program operation.
In some embodiments, the address space of each memory block list in the preset memory pool is a virtual address space, that is, the memory addresses maintained and managed by each memory block list are virtual addresses. Based on fig. 5 or fig. 8, the obtaining of the target memory block from the preset memory pool in S102 and S1022 may be implemented by executing S301 to S303 as shown in fig. 11, which will be described with reference to these steps.
S301, determining whether the free memory block in the memory block list corresponding to the current frame meets the memory resource amount according to the memory resource amount applied by the memory request.
In the embodiment of the present application, before the electronic device obtains the target memory block from the preset memory pool, it needs to determine, in the memory block list corresponding to the current frame memory allocator, that is, the memory block list corresponding to the current frame, whether a free memory block satisfies the amount of memory resources applied for by the frame calculation task of the current frame.
S302, when the amount is not satisfied, applying for virtual memory from an operating system of the electronic device through the preset memory pool, and adding a free memory block to the memory block list corresponding to the current frame according to the applied virtual memory.
In the embodiment of the present application, when the free memory blocks in the memory block list corresponding to the current frame cannot satisfy the amount of memory block resources applied for by the current frame, the electronic device applies to the operating system, through the preset memory pool, for a new virtual address space for the memory block list corresponding to the current frame, and adds a corresponding free memory block to that list according to the virtual address space newly allocated by the operating system.
It should be noted that, because the memory blocks in each memory block list have a fixed size, when the electronic device applies for a new memory block it likewise applies for a new memory block of that fixed size.
S303, when the amount is satisfied, taking the free memory block as the target memory block.
In the embodiment of the present application, when a free memory block in the memory block list corresponding to the current frame can satisfy the amount of memory resources applied for by the current frame, the electronic device directly uses that free memory block as the target memory block, and can then allocate memory resources of the requested amount using the virtual addresses in the free memory block.
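Steps S301 to S303 amount to popping a free block from the list that belongs to the current frame, growing the list from the operating system when it is empty. The following C++ sketch shows that shape under the assumption of fixed-size blocks; kBlockSize, the vector-based free list and the ReserveFromOS placeholder are illustrative and not taken from this application.

```cpp
// S301-S303: take a free block from the list of the current frame's
// allocator, growing the list with freshly reserved virtual memory when it
// is empty. kBlockSize and ReserveFromOS are illustrative assumptions.
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr std::size_t kBlockSize = 2 * 1024 * 1024;   // fixed-size blocks (illustrative)

struct MemoryBlock { std::uint8_t* base; std::size_t size; };

MemoryBlock ReserveFromOS() {
    // Placeholder: a real implementation would reserve address space with
    // VirtualAlloc(MEM_RESERVE) on Windows or mmap(PROT_NONE) on POSIX.
    return MemoryBlock{new std::uint8_t[kBlockSize], kBlockSize};
}

class BlockList {                            // one list per frame memory allocator
public:
    MemoryBlock Acquire() {
        if (free_.empty()) {                 // S301: no free block satisfies the request
            free_.push_back(ReserveFromOS());// S302: add a new fixed-size block
        }
        MemoryBlock block = free_.back();    // S303: hand out a free block
        free_.pop_back();
        return block;
    }
    void Release(MemoryBlock block) { free_.push_back(block); }
private:
    std::vector<MemoryBlock> free_;
};
```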
In some embodiments, based on S301-S303, as shown in fig. 11, S103 may be implemented by performing S304 as follows:
S304, submitting the virtual address in the target memory block to the operating system, so that the operating system allocates memory resources at physical addresses to the frame calculation task according to the virtual address.
In this embodiment, the electronic device may submit the virtual addresses of the memory resources required by each thread in the target memory block to an operating system of the electronic device through the sub-allocator. The operating system of the electronic device can correspondingly allocate the memory resources of the physical address to each thread according to the virtual address, thereby completing the memory allocation of the whole frame calculation task.
It will be appreciated that, since the allocated memory is virtual address space, the actual physical memory may be shared between different frames. The virtual memory addresses are managed through the preset memory pool, and after a virtual memory address has been allocated, only the memory that is actually used occupies physical space, which greatly reduces the memory wasted by the allocator; at the same time, a memory address can be invalidated immediately by a system call, so the efficiency and stability of memory allocation are improved.
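The reserve-then-commit behaviour described above can be demonstrated with standard virtual-memory calls. The following POSIX-only C++ sketch reserves a large address range with no access rights and commits only a small prefix; on Windows the equivalent calls would be VirtualAlloc with MEM_RESERVE and MEM_COMMIT. The sizes are arbitrary and the sketch illustrates the mechanism rather than this application's implementation.

```cpp
// POSIX sketch of "reserve now, commit on use": the address range is
// reserved up front with no access rights, and physical pages are only
// committed (made accessible) for the part that is actually allocated.
#include <sys/mman.h>
#include <cstddef>
#include <cstdio>

int main() {
    const std::size_t reserved_size  = 64 * 1024 * 1024;  // 64 MiB of address space
    const std::size_t committed_size = 1 * 1024 * 1024;   // only 1 MiB backed for now

    // Reserve: address space only, no readable/writable pages yet.
    void* base = mmap(nullptr, reserved_size, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED) return 1;

    // Commit the prefix that the allocator actually hands out.
    if (mprotect(base, committed_size, PROT_READ | PROT_WRITE) != 0) return 1;
    static_cast<char*>(base)[0] = 42;                      // touching committed memory is fine

    std::printf("reserved %zu bytes, committed %zu bytes at %p\n",
                reserved_size, committed_size, base);
    munmap(base, reserved_size);
    return 0;
}
```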
In some embodiments, based on fig. 5 or fig. 8, as shown in fig. 12, after S103, that is, after the electronic device completes memory allocation, a memory resource recycling process after the frame calculation task is finished may be further implemented by executing S104 to S106, which will be described with reference to the steps shown in fig. 12.
S104, when a current frame state end instruction is received, updating the current frame state through the current frame memory allocator, and notifying the target sub memory allocator to reset memory allocation information.
In the embodiment of the present application, the current frame state represents the service type of the frame calculation task. In the memory allocation stage, the electronic device allocates memory for the frame calculation task of the current frame according to the current frame state; when the frame calculation task is completed, a current frame state end instruction is triggered, indicating that the calculation corresponding to the current frame state, such as the frame data of a rendering frame or a game frame, has been completed. On receiving the current frame state end instruction, the electronic device starts the recovery process for the allocated memory resources corresponding to the current frame state.
In the embodiment of the present application, on receiving the current frame state end instruction, the electronic device may update the current frame state to an end state through the current frame memory allocator. Here, the end state indicates that execution of the computing service corresponding to the current frame state has finished. Illustratively, when the current frame state is a game frame it is updated to a game-frame end state, and when it is a rendering frame it is updated to a rendering-frame end state.
In the embodiment of the present application, the electronic device, through the current frame memory allocator, notifies the per-thread sub-allocators in the target sub memory allocator to reset their memory allocation information. Here, the target sub memory allocator is the sub memory allocator that allocated memory for the frame calculation task corresponding to the current frame state during the memory allocation process. The electronic device may record the target sub memory allocators invoked during memory allocation, so that when the frame state calculation ends it can notify the corresponding target sub memory allocators to execute the memory recovery process.
In some embodiments, based on fig. 10, as shown in fig. 13, each frame memory allocator instance of the M frame memory allocators maintains two sub-allocator lists. The Game sub-allocator list records the target sub memory allocators that allocated memory for the frame calculation tasks of the game frame (Game Stage); after the game frame calculation ends, the electronic device may notify the target sub memory allocators in the Game sub-allocator list to reset their memory allocation information. The Render sub-allocator list records the target sub memory allocators that allocated memory for the frame calculation tasks of the rendering frame (Render Stage) and the full frame; when the rendering frame ends, that is, when the full frame ends, the electronic device notifies the target sub memory allocators in the Render sub-allocator list to reset their memory allocation information.
It should be noted here that, since the memory life cycle corresponding to the full frame and the memory life cycle corresponding to the rendering frame are invalidated at the same time, the electronic device can manage memory invalidation for the sub-allocators of both of these life cycles through the Render sub-allocator list.
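One possible shape for the two lists kept by each frame memory allocator instance is sketched below in C++: sub-allocators used for Game-life-cycle memory are recorded in one list and reset when the game frame ends, and those used for Render or full-frame memory are recorded in the other and reset when the rendering frame ends. All type and member names are assumptions rather than the application's actual structures.

```cpp
// Two reset lists per frame memory allocator instance: the Game list is
// reset when the game frame ends, the Render list (which also covers the
// full-frame life cycle) when the rendering frame ends.
#include <vector>

struct SubAllocator {
    void ResetAllocationInfo() { /* reset offset and allocation starting point */ }
};

struct FrameMemoryAllocator {
    std::vector<SubAllocator*> game_list;     // used during the Game stage
    std::vector<SubAllocator*> render_list;   // used during the Render / full-frame stage

    void OnGameStageEnd() {
        for (SubAllocator* s : game_list) s->ResetAllocationInfo();
        game_list.clear();
    }
    void OnRenderStageEnd() {                 // the full-frame life cycle ends here as well
        for (SubAllocator* s : render_list) s->ResetAllocationInfo();
        render_list.clear();
    }
};
```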
S105, resetting, through the target sub memory allocator, the memory allocation information of the memory block list corresponding to the current frame, and performing address invalidation processing on the allocated virtual addresses corresponding to the frame calculation task.
In the embodiment of the present application, the electronic device may reset the memory allocation information in the memory block list corresponding to the current frame. Here, the memory allocation information may include, but is not limited to, allocation information such as the size of allocated memory and the allocation starting point, chosen according to the actual situation; the embodiments of the present application are not limited in this respect.
In the embodiment of the present application, since the memory resources allocated for the frame calculation task of the current frame are virtual memory addresses, the electronic device may, through the target sub memory allocator, invalidate the allocated virtual addresses corresponding to the frame calculation task of the current frame while retaining the address space. This avoids memory waste and prevents other frames from erroneously accessing the invalidated addresses.
In some embodiments, on a Windows operating system, the electronic device may call the VirtualFree method with the MEM_DECOMMIT parameter, invalidating the virtual memory address while preserving the address space; on a Linux/macOS/Android/iOS operating system, the electronic device may call the mprotect method to make the protected address pages inaccessible and call the madvise method with MADV_DONTNEED to inform the operating system that the segment of virtual memory is no longer used, so that the virtual memory address is invalidated but the address space is retained. The specific invalidation method may be chosen according to the actual situation; the embodiments of the present application are not limited in this respect.
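The invalidation step described in the previous paragraph can be sketched directly with the system calls it names: VirtualFree with MEM_DECOMMIT on Windows, and mprotect plus madvise(MADV_DONTNEED) on POSIX systems. The helper name below is illustrative, and the address and size are assumed to be page-aligned.

```cpp
// Release the physical backing and make the pages inaccessible, but keep the
// virtual address range reserved so that stale pointers fault immediately.
#include <cstddef>

#if defined(_WIN32)
#include <windows.h>
void InvalidateKeepAddressSpace(void* addr, std::size_t size) {
    // Decommit physical storage; the reserved address range is preserved.
    VirtualFree(addr, size, MEM_DECOMMIT);
}
#else
#include <sys/mman.h>
void InvalidateKeepAddressSpace(void* addr, std::size_t size) {
    mprotect(addr, size, PROT_NONE);          // any further access now faults
    madvise(addr, size, MADV_DONTNEED);       // tell the OS the pages are no longer needed
}
#endif
```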
S106, when the current frame state indicates that a complete frame has been calculated, returning the virtual addresses that have undergone address invalidation processing to the preset memory pool.
In the embodiment of the present application, after the current frame memory allocator updates the current frame state, when the current frame state indicates that a complete frame has been calculated, for example when the current frame state is rendering-frame end or full-frame end, meaning that both the game frame and the rendering frame of one interactive frame in fig. 7 have finished their frame calculation and the calculation of the current frame has completely ended, the corresponding memory resources may be returned. The electronic device returns the virtual addresses that have undergone address invalidation processing to the preset memory pool through the target sub memory allocator, thereby completing the memory recovery.
It should be noted that, when the frame state indicates that a complete frame has not yet been calculated, the electronic device does not immediately return the memory resources that have undergone address invalidation processing to the preset memory pool. Taking the game frame of the 1st frame as an example, after the game frame of the 1st frame has been calculated, the frozen memory is not immediately returned to the global memory pool; otherwise it could be reallocated to and used by the rendering frame of the 1st frame, and an error in which the rendering frame uses a game-frame virtual address whose life cycle has already passed could not be detected. Therefore, in the embodiment of the present application, the sub memory allocator returns the virtual addresses occupied by the current frame only when the frame state indicates that a complete frame has been calculated.
It can be understood that, in the embodiment of the present application, when performing memory recovery the electronic device may invalidate only the virtual memory addresses corresponding to frame calculations whose life cycle is shorter than one complete frame, for example the frame calculation of a game frame, which avoids memory waste and memory access errors and thereby improves the stability of memory management.
Next, based on the memory management device shown in fig. 13, taking a memory allocation process of frame-by-frame calculation in an interactive rendering game scene as an example, an exemplary application of the embodiment of the present application in an actual application scene will be described with reference to fig. 14 and fig. 15.
In some embodiments, based on fig. 14, an embodiment of the present application provides a method for allocating memory for frame calculation in a game scene, including:
S401, the frame calculation task acquires current frame data.
The frame calculation task in S401 is a frame data calculation task for refreshing a frame corresponding to a game application, and the frame calculation task requests a frame manager on the electronic device to acquire frame data of a current frame to be calculated, so as to start frame calculation of the current frame.
S402, the frame computing task acquires a frame memory allocator corresponding to the current frame.
In S402, the frame calculation task may determine, from the M frame memory allocators, a frame memory allocator corresponding to the current frame, according to the sequence number of the current frame, so as to perform memory allocation for the current frame.
S403, the frame computing task applies for the memory from the frame memory distributor.
In S403, the frame memory allocator on the electronic device may receive a memory request initiated by the frame calculation task. The memory request may include a frame state of the current frame and a memory life cycle for which the frame calculation task is to be applied.
S404, the frame memory distributor checks whether the memory life cycle accords with the frame state.
In S404, the frame memory allocator determines whether the frame state of the current frame, such as a game frame, a rendering frame or a full frame, matches the memory life cycle applied for by the memory request. If so, S405 is executed; otherwise, S412 is executed.
S405, the frame memory allocator obtains the sub memory allocator according to the ID and the memory life cycle.
In S405, if the memory life cycle requested by the frame calculation task is reasonable, that is, memory with a Game or full-frame life cycle is requested during a game frame, or memory with a Render or full-frame life cycle is requested during a rendering frame (at this point the full-frame life cycle is equivalent to the Render life cycle), the frame memory allocator obtains the corresponding sub memory allocator from the sub memory allocator matrix according to the ID of the frame memory allocator and the requested memory life cycle.
S406, the sub memory allocator determines whether the last virtual memory block satisfies the requested size.
In S406, the sub memory allocator determines the corresponding target memory block list in the preset memory pool according to the ID of the frame memory allocator, and then determines whether the remaining memory of the last virtual memory block in the target memory block list satisfies the size requested by the memory request of the current frame. If not, S408 is executed; otherwise, S407 is executed.
S407, the sub memory allocator allocates the memory applied for by the threads.
In S407, when the last virtual memory block satisfies the requested size, the sub memory allocator uses the last virtual memory block as the target memory block and submits the virtual addresses in it to the operating system, and the operating system allocates the corresponding physical memory to each thread in the frame calculation task.
S408, the sub-memory distributor applies for the virtual memory block from the global virtual memory pool.
In S408, the global virtual memory pool is equivalent to the preset memory pool: it maintains virtual addresses and provides two function call interfaces, one for requesting a memory block and one for returning a memory block. The sub memory allocator may apply for a virtual-address memory block from the global virtual memory pool through the interface for requesting a memory block.
S409, the global virtual memory pool judges whether the target memory list has free memory blocks.
In S409, when allocating a virtual memory block, the global virtual memory pool first checks whether the list of free memory blocks in the target memory list corresponding to the current frame is empty. If it is empty, that is, the target memory list has no free memory block, S411 is executed; otherwise, S410 is executed.
S410, the global virtual memory pool allocates the memory applied for by the threads.
In S410, when the target memory list contains a free memory block, the global virtual memory pool submits the memory addresses in the free memory block to the operating system, and the operating system allocates the physical memory to each thread in the frame calculation task.
S411, the global virtual memory pool applies for the virtual memory from the operating system.
In S411, when there is no free memory block in the target memory list, the global virtual memory pool applies to the operating system for virtual memory of the fixed block size and, according to the newly applied virtual memory, adds a new free memory block to the target memory list, from which the memory resources applied for by the threads are then allocated.
S412, the frame memory distributor returns error information.
In S412, if the memory life cycle does not match the frame state, for example if a thread requests the memory life cycle corresponding to the rendering frame while in the frame state of the game frame, or requests the life cycle corresponding to the game frame while in the frame state of the rendering frame, no memory allocation is performed and an error prompt is returned.
It can be understood that, in the embodiment of the present application, the M independent frame memory allocators used cyclically frame by frame ensure that addresses are not reused across M frames, which solves the problem that memory access errors caused by repeated inter-frame addresses are hard to discover, enables fast location of address access errors and improves the stability of memory management. Moreover, memory resources are allocated from independent virtual address spaces by thread-independent, frame-independent and life-cycle-independent sub memory allocators, so memory allocation is thread-independent, efficient lock-free thread safety is achieved, the virtual addresses allocated to different threads do not overlap, multi-thread calculation is supported and program running efficiency is improved.
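The allocation flow S401 to S412 can be condensed into the following C++ sketch. All types and helper names (FrameAllocator, SubAllocator, GlobalVirtualPool and so on) are illustrative reconstructions of the flow above rather than this application's actual interfaces, and error handling is reduced to returning nullptr.

```cpp
// Condensed sketch of the allocation flow S401-S412.
#include <cstddef>

enum class FrameState { Game, Render, Full };
enum class LifeCycle  { Game, Render, Full };

struct MemoryBlock { void* base; std::size_t used; std::size_t size; };

struct GlobalVirtualPool {
    // Interface only: a real pool implements S408-S411 (free-block lookup and
    // growing the list with OS-reserved virtual memory when it is empty).
    MemoryBlock* RequestBlock(std::size_t frame_allocator_id);
    void ReturnBlock(std::size_t frame_allocator_id, MemoryBlock* block);
};

struct SubAllocator {
    MemoryBlock* last = nullptr;
    void* Allocate(GlobalVirtualPool& pool, std::size_t id, std::size_t size) {
        if (!last || last->used + size > last->size)  // S406: last block too small?
            last = pool.RequestBlock(id);             // S408: get a new virtual block
        void* p = static_cast<char*>(last->base) + last->used;
        last->used += size;                           // S407: hand the memory to the thread
        return p;
    }
};

struct FrameAllocator {
    std::size_t id;                                   // F0..F5
    static bool LifeCycleMatches(FrameState s, LifeCycle lc) {
        // S404: a game frame may request Game or full-frame memory,
        // a rendering (or full) frame may request Render or full-frame memory.
        return (s == FrameState::Game)
                   ? (lc == LifeCycle::Game || lc == LifeCycle::Full)
                   : (lc == LifeCycle::Render || lc == LifeCycle::Full);
    }
    SubAllocator* GetSubAllocator(LifeCycle lc, std::size_t thread_index);   // S405 (interface only)

    void* Allocate(GlobalVirtualPool& pool, FrameState state, LifeCycle lc,
                   std::size_t thread_index, std::size_t size) {
        if (!LifeCycleMatches(state, lc)) return nullptr;    // S412: error
        return GetSubAllocator(lc, thread_index)->Allocate(pool, id, size);
    }
};
```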
In some embodiments, based on fig. 14, an embodiment of the present application further provides a method for memory recovery when a frame state ends, as shown in fig. 15:
S501, the frame manager finishes the calculation of the current frame.
S502, the frame manager notifies the frame memory allocator to update the frame state.
In S501-S502, when the frame calculation task of the current frame is finished, the frame manager notifies the current frame memory allocator to update the frame state of the current frame to the finished state.
S503, the frame memory allocator updates the frame state of the current frame.
In S503, the frame memory allocator updates the frame state of the current frame to the end state.
S504, the frame memory allocator notifies all sub memory allocators in the sub-allocator list corresponding to the ended state to execute the reset operation.
S505, the sub memory allocator resets the memory allocation information.
S506, freezing the memory.
In S505-S506, the sub memory allocator resets allocation information such as the size of the allocated memory and the allocation starting point, and freezes the corresponding memory block address to make the memory address inaccessible, but retains the address space.
S507, the sub memory allocator determines whether the previous frame status is a render frame.
In S507, the sub memory allocator may recycle the mapped physical memory at an appropriate time. For the current frame whose frame state is the end state, the sub memory allocator checks whether the previous frame state is a rendering frame. If so, a complete frame of the game application has been calculated, and the sub memory allocator executes S508 to return the memory blocks; otherwise, no operation is executed until this check succeeds, that is, until the previous frame state is a rendering frame, at which point the memory blocks are returned together.
S508, the sub memory allocator returns the memory blocks.
S509, the global virtual memory pool records the returned memory blocks as free memory blocks.
In S508-S509, the sub memory allocator returns all frozen memory blocks to the global virtual memory pool, and the global virtual memory pool records the returned memory blocks as free memory blocks in the memory block list corresponding to the current frame.
It can be understood that, through the memory recovery process, the electronic device can recover the memory resource in time, thereby improving the utilization rate of the memory resource, and further improving the operation efficiency of the program and the stability of the memory management.
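The recovery flow S501 to S509 can likewise be condensed into a short C++ sketch: when a frame state ends, the sub-allocators that were used reset their allocation information and freeze (decommit) their memory, and the frozen blocks are returned to the global virtual memory pool only once a rendering frame has ended, that is, once a complete frame has been calculated. Names and types are illustrative and correspond to the earlier sketches only loosely.

```cpp
// Condensed sketch of the recovery flow S501-S509.
#include <cstddef>
#include <vector>

enum class FrameState { Game, Render, Full };

struct MemoryBlock;
struct GlobalVirtualPool {
    void ReturnBlock(std::size_t /*frame_allocator_id*/, MemoryBlock*) {
        /* record the block as a free memory block in the corresponding list */
    }
};

struct SubAllocator {
    std::vector<MemoryBlock*> blocks;
    void ResetAndFreeze() {
        // S505-S506: reset size/offset bookkeeping and decommit the pages
        // (e.g. VirtualFree(MEM_DECOMMIT) or mprotect + madvise) while
        // keeping the address space reserved.
    }
    void ReturnAll(GlobalVirtualPool& pool, std::size_t id) {
        for (MemoryBlock* b : blocks) pool.ReturnBlock(id, b);   // S508-S509
        blocks.clear();
    }
};

void OnFrameStateEnd(FrameState ended, std::size_t frame_allocator_id,
                     std::vector<SubAllocator*>& used, GlobalVirtualPool& pool) {
    for (SubAllocator* s : used) s->ResetAndFreeze();      // S504-S506
    if (ended == FrameState::Render) {                     // S507: complete frame finished?
        for (SubAllocator* s : used) s->ReturnAll(pool, frame_allocator_id);
    }                                                      // otherwise keep the frozen blocks
}
```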
Continuing with the exemplary structure of the memory management device 255 provided in the embodiments of the present application implemented as software modules, in some embodiments, as shown in fig. 4, the software modules of the memory management device 255 stored in the storage 250 may include:
a distributor determining module 2551, configured to determine, according to a memory request initiated by a frame calculation task of a current frame, a current frame memory distributor corresponding to the current frame from preset M frame memory distributors; the M frame memory distributors are memory distributors circularly used by taking frames as units; m is a positive integer greater than or equal to 2;
a memory obtaining module 2552, configured to obtain, by using the current frame memory distributor, a target memory block corresponding to the current frame from M memory block lists in a preset memory pool; the M memory block lists are in one-to-one correspondence with the M frame memory allocators; address spaces among the M memory block lists are not coincident;
a memory allocation module 2553, configured to acquire memory resources from the target memory block, and allocate the memory resources to the frame computation task.
In some embodiments, each of the M frame memory allocators comprises: at least one sub memory allocator corresponding to the at least one frame state; the memory obtaining module 2552 is further configured to determine, according to the current frame state of the current frame, a target sub memory distributor from at least one sub memory distributor corresponding to the current frame memory distributor; the current frame state belongs to the at least one frame state; the current frame state represents the service type of the frame calculation task; and acquiring the target memory block from the preset memory pool through the target sub-memory distributor.
In some embodiments, each of the at least one sub-memory allocator comprises a sub-allocator for a number of threads; the number of the threads is the number of threads contained in the frame calculation task; each sub-distributor corresponds to each thread of the frame calculation task one by one; the memory allocation module 2553 is further configured to synchronously acquire, through each of the target sub-memory allocators, a memory resource corresponding to each thread in the frame computation task from the target memory block, and allocate the memory resource to each thread correspondingly, so as to complete memory allocation for the frame computation task.
In some embodiments, the memory obtaining module 2552 is further configured to determine, by the current frame memory allocator, whether a current frame state of the current frame matches a memory life cycle applied by the memory request before obtaining the target memory block corresponding to the current frame from the M memory block lists in the preset memory pool through the current frame memory allocator; and under the condition that the frame state is not matched with the memory life cycle, not executing memory allocation and carrying out error prompt.
In some embodiments, the address space of the M memory block lists is a virtual address space; the memory obtaining module 2552 is further configured to determine, according to the amount of the memory resource applied by the memory request, whether an idle memory block in the memory block list corresponding to the current frame meets the amount of the memory resource; under the condition of not being satisfied, applying for a virtual memory from an operating system of the electronic equipment through the preset memory pool, and adding an idle memory block in the memory block list corresponding to the current frame according to the applied virtual memory; if the idle memory block is satisfied, taking the idle memory block as the target memory block; the memory allocation module 2553 is further configured to submit the virtual address in the target memory block to the operating system, so that the operating system allocates the memory resource of the physical address to the frame computation task according to the virtual address.
In some embodiments, the memory allocation module 2553 is further configured to, after the memory resources are obtained from the target memory block and allocated to the frame calculation task, update the current frame state through the current frame memory allocator and notify the target sub memory allocator of resetting memory allocation information when a current frame state end instruction is received; resetting memory allocation information of the memory block list corresponding to the current frame through the target sub-memory allocator, and performing address invalidation processing on the allocated virtual address corresponding to the frame calculation task; and under the condition that the current frame state representation completes the complete frame calculation, returning the virtual address subjected to address invalidation processing to the preset memory pool.
It should be noted that the above description of the embodiment of the apparatus, similar to the above description of the embodiment of the method, has similar beneficial effects as the embodiment of the method. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
Embodiments of the present application provide a computer-readable storage medium having stored therein executable instructions that, when executed by a processor, will cause the processor to perform a method provided by embodiments of the present application, for example, as illustrated in fig. 5, 6, 8, 9, 11, 12, 14, and 15.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, e.g., in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In summary, in a multi-frame parallel computing scenario, virtual memory is allocated for each frame by a frame memory allocator that works in units of frames, so that when the frame calculation task of the current frame erroneously accesses a memory address that was invalidated in a previous frame, the operating system can find the problem in time and quickly report and locate the error, which improves the stability of memory management. Independent frame memory allocators allocate memory from memory block lists with independent address spaces, so the memory addresses allocated to different frames do not overlap in the multi-frame parallel computing scene; each frame has its own allocation starting point, the influence of one frame on the running memory environment of other frames when its allocation starting point is reset and its memory is reallocated is reduced, multi-frame parallel computation is supported and program running efficiency is improved. Through the subdivision of memory life cycles, the memory resources occupied by a frame calculation task can be recovered earlier, unnecessary memory occupation is reduced and memory allocation becomes more reasonable, which improves memory utilization efficiency and reduces the memory footprint of the program. Moreover, since the sub-allocators are frame-independent, life-cycle-independent and thread-independent memory allocators, performing memory allocation through one sub-allocator per thread yields thread-independent memory allocation, achieves efficient lock-free thread safety and improves program running efficiency.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (10)

1. A memory management method, comprising:
determining a current frame memory distributor corresponding to a current frame from preset M frame memory distributors according to a memory request initiated by a frame calculation task of the current frame; the M frame memory distributors are memory distributors circularly used by taking frames as units; m is a positive integer greater than or equal to 2;
acquiring a target memory block corresponding to the current frame from M memory block lists of a preset memory pool through the current frame memory distributor; the M memory block lists are in one-to-one correspondence with the M frame memory allocators; address spaces among the M memory block lists are not coincident;
and acquiring memory resources from the target memory block and distributing the memory resources to the frame calculation task.
2. The method of claim 1, wherein each of the M frame memory allocators comprises: at least one sub memory allocator corresponding to the at least one frame state; the obtaining, by the current frame memory allocator, a target memory block corresponding to the current frame from M memory block lists in a preset memory pool includes:
determining a target sub memory distributor from at least one sub memory distributor corresponding to the current frame memory distributor according to the current frame state of the current frame; the current frame state belongs to the at least one frame state; the current frame state represents the service type of the frame calculation task;
and acquiring the target memory block from the preset memory pool through the target sub-memory distributor.
3. The method of claim 2, wherein each of the at least one sub-memory allocator comprises a sub-allocator for a number of threads; the number of the threads is the number of threads contained in the frame calculation task; each sub-distributor corresponds to each thread of the frame calculation task one by one; the obtaining memory resources from the target memory block and allocating the memory resources to the frame computation task includes:
and synchronously acquiring the memory resource corresponding to each thread in the frame computing task from the target memory block through each of the target sub-memory distributors, and correspondingly distributing the memory resource to each thread to finish the memory distribution of the frame computing task.
4. The method according to any one of claims 1 to 3, wherein before the obtaining, by the current frame memory allocator, the target memory block corresponding to the current frame from the M memory block lists in a preset memory pool, the method further includes:
determining whether the current frame state of the current frame is matched with the memory life cycle applied by the memory request through the current frame memory distributor;
and under the condition that the frame state is not matched with the memory life cycle, not executing memory allocation and carrying out error prompt.
5. The method according to claim 4, wherein the address space of the M memory block lists is a virtual address space; the obtaining of the target memory block corresponding to the current frame includes:
determining whether the idle memory block in the memory block list corresponding to the current frame meets the memory resource amount according to the memory resource amount applied by the memory request;
under the condition of not being satisfied, applying for a virtual memory from an operating system of the electronic equipment through the preset memory pool, and adding an idle memory block in the memory block list corresponding to the current frame according to the applied virtual memory;
if the idle memory block is satisfied, taking the idle memory block as the target memory block;
the obtaining memory resources from the target memory block and allocating the memory resources to the frame computation task includes:
and submitting the virtual address in the target memory block to the operating system so that the operating system allocates the memory resource of the physical address to the frame computing task according to the virtual address.
6. The method according to claim 5, wherein after the obtaining memory resources from the target memory block and allocating the memory resources to the frame calculation task, the method further comprises:
under the condition of receiving a current frame state ending instruction, updating the current frame state through the current frame memory distributor, and informing the target sub memory distributor to reset memory distribution information;
resetting memory allocation information of the memory block list corresponding to the current frame through the target sub-memory allocator, and performing address invalidation processing on the allocated virtual address corresponding to the frame calculation task;
and under the condition that the current frame state representation completes the complete frame calculation, returning the virtual address subjected to address invalidation processing to the preset memory pool.
7. A memory management device, comprising:
the distributor determining module is used for determining a current frame memory distributor corresponding to a current frame from preset M frame memory distributors according to a memory request initiated by a frame calculation task of the current frame; the M frame memory distributors are memory distributors circularly used by taking frames as units; m is a positive integer greater than or equal to 2;
a memory obtaining module, configured to obtain, by the current frame memory distributor, a target memory block corresponding to the current frame from M memory block lists in a preset memory pool; the M memory block lists are in one-to-one correspondence with the M frame memory allocators; address spaces among the M memory block lists are not coincident;
and the memory allocation module is used for acquiring memory resources from the target memory block and allocating the memory resources to the frame calculation task.
8. An electronic device, comprising:
a memory for storing executable instructions;
a processor for implementing the method of any one of claims 1 to 6 when executing executable instructions stored in the memory.
9. A computer-readable storage medium having stored thereon executable instructions for, when executed by a processor, implementing the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program or instructions, characterized in that the computer program or instructions, when executed by a processor, implement the method of any of claims 1 to 6.
CN202111499167.3A 2021-12-09 2021-12-09 Memory management method, device, equipment, computer program and storage medium Pending CN114153615A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111499167.3A CN114153615A (en) 2021-12-09 2021-12-09 Memory management method, device, equipment, computer program and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111499167.3A CN114153615A (en) 2021-12-09 2021-12-09 Memory management method, device, equipment, computer program and storage medium

Publications (1)

Publication Number Publication Date
CN114153615A true CN114153615A (en) 2022-03-08

Family

ID=80454096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111499167.3A Pending CN114153615A (en) 2021-12-09 2021-12-09 Memory management method, device, equipment, computer program and storage medium

Country Status (1)

Country Link
CN (1) CN114153615A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116764225A (en) * 2023-06-09 2023-09-19 广州三七极梦网络技术有限公司 Efficient path-finding processing method, device, equipment and medium
WO2024067348A3 (en) * 2022-09-28 2024-05-16 维沃移动通信有限公司 Memory allocator determination method and apparatus, and electronic device and storage medium


Similar Documents

Publication Publication Date Title
CN111729293B (en) Data processing method, device and storage medium
TWI696952B (en) Resource processing method and device
CN114153615A (en) Memory management method, device, equipment, computer program and storage medium
US8695079B1 (en) Allocating shared resources
CN109144619B (en) Icon font information processing method, device and system
CN104380256A (en) Method, system and executable piece of code for virtualisation of hardware resource associated with computer system
CN111880956B (en) Data synchronization method and device
US20130063463A1 (en) Real-time atlasing of graphics data
CN114329298A (en) Page presentation method and device, electronic equipment and storage medium
CN108073423A (en) A kind of accelerator loading method, system and accelerator loading device
CN112598565B (en) Service operation method and device based on accelerator card, electronic equipment and storage medium
CN115686346A (en) Data storage method and device and computer readable storage medium
CN113535087B (en) Data processing method, server and storage system in data migration process
CN115421787A (en) Instruction execution method, apparatus, device, system, program product, and medium
CN109558082B (en) Distributed file system
CN113032088A (en) Dirty page recording method and device, electronic equipment and computer readable medium
CN112434237A (en) Page loading method and device, electronic equipment and storage medium
CN114756788A (en) Cache updating method, device, equipment and storage medium
CN101196835A (en) Method and apparatus for communicating between threads
CN117668319B (en) Data query method, electronic device and storage medium
US9251101B2 (en) Bitmap locking using a nodal lock
CN109388498A (en) A kind of processing method of mutual exclusion, device, equipment and medium
CN114816032B (en) Data processing method and device, electronic equipment and storage medium
CN116991600B (en) Method, device, equipment and storage medium for processing graphic call instruction
CN116166572A (en) System configuration and memory synchronization method and device, system, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination