CN112732448A - Memory space allocation method and device and computer equipment

Info

Publication number
CN112732448A
CN112732448A
Authority
CN
China
Prior art keywords
memory space
target
queue
target message
subspaces
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110065041.9A
Other languages
Chinese (zh)
Inventor
苏向阳
孟宪刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guoqi Intelligent Control Beijing Technology Co Ltd
Original Assignee
Guoqi Intelligent Control Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guoqi Intelligent Control Beijing Technology Co Ltd filed Critical Guoqi Intelligent Control Beijing Technology Co Ltd
Priority to CN202110065041.9A priority Critical patent/CN112732448A/en
Publication of CN112732448A publication Critical patent/CN112732448A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1041Resource optimization
    • G06F2212/1044Space efficiency improvement

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

The invention discloses a memory space allocation method, a memory space allocation device and computer equipment. The memory space allocation method comprises the following steps: determining a target memory space, and an initial position and a tail position of the target memory space, according to a target message queue; constructing a circular memory space queue connected end to end according to the initial position and the tail position of the target memory space; and sequentially allocating subspaces in the circular memory space queue to target messages according to the sizes of the target messages. By implementing the invention, the problem in the related art that the real-time requirement of an automatic driving system cannot be met is solved. With the constructed circular memory space queue, a new message can be stored directly in the segment of memory immediately following the memory occupied by the previous message, so the transmitted data is stored in the memory space in time, the data storage requirement of the automatic driving system is met in real time, memory space fragments are fully utilized, and fragmentation of the system memory is reduced.

Description

Memory space allocation method and device and computer equipment
Technical Field
The invention relates to the technical field of computers, in particular to a memory space allocation method, a memory space allocation device and computer equipment.
Background
Automatic driving technology mainly comprises sensing, planning, decision-making, control and the like. The perception algorithm, the decision algorithm and the control algorithm all depend on the storage and transmission of data or information. When data is transmitted between the acquisition unit and an algorithm unit, and between every two algorithm nodes, two steps are required: data caching and data publishing. Conventionally, a data publishing unit applies to the operating system for memory and releases the memory after the data has been published.
In the process of applying for and releasing memory, the related art searches for a continuous, idle, minimum memory allocation unit at allocation time, which causes problems such as large time overhead and severe memory fragmentation. In automatic driving, the real-time requirements on the functional modules are very high; in particular, the complexity of automatic driving algorithms means that the number of data distribution nodes is very large, which further increases the time overhead of memory operations. This affects the real-time responsiveness of the automatic driving system in emergency scenarios (for example, a pedestrian suddenly stepping into the road) and may compromise the safety of the automatic driving system.
Disclosure of Invention
In view of this, embodiments of the present invention provide a memory space allocation method, apparatus and computer device, so as to solve the problem that existing memory allocation methods cannot meet real-time requirements.
According to a first aspect, an embodiment of the present invention provides a method for allocating memory space, including: determining a target memory space, and an initial position and a tail position of the target memory space, according to a target message queue; constructing a circular memory space queue connected end to end according to the initial position and the tail position of the target memory space, wherein the circular memory space queue comprises a plurality of subspaces; and sequentially allocating subspaces in the circular memory space queue to the target message according to the size of the target message.
Optionally, the sequentially allocating, according to the size of the target message, subspaces in the circular memory space queue to the target message includes: determining the number of subspaces to be occupied according to the size of each target message; and determining the storage location of each target message in the circular memory space queue according to the occupied number, wherein the storage location comprises at least one subspace, and the storage locations occupied by the target messages sequentially stored in the circular memory space queue are contiguous.
Optionally, when the storage location of the target message includes a plurality of subspaces, the plurality of subspaces included in the storage location of the target message are contiguous.
Optionally, the method further comprises: when the size of the remaining subspace in the circular memory space queue is smaller than the size of a newly acquired message, releasing subspaces in the circular memory space queue according to the sizes of the remaining subspace and the new message.
Optionally, the releasing subspaces in the circular memory space queue according to the sizes of the remaining subspace and the new message includes: determining the number of subspaces to be released according to the difference between the sizes of the remaining subspace and the new message; and sequentially releasing the corresponding number of subspaces in the circular memory space queue.
Optionally, the method further comprises: when the subspaces in the circular memory space queue all store target messages and a new target message is acquired, releasing subspaces in the circular memory space queue according to the size of the new target message.
Optionally, the releasing subspaces in the circular memory space queue according to the size of the new target message includes: determining the number of subspaces to be released according to the size of the new target message; and sequentially releasing the corresponding number of subspaces starting from the initial position of the circular memory space queue.
Optionally, the method further comprises: acquiring the time at which the target message stored at a storage location was written, and releasing the subspace corresponding to the storage location when that time exceeds a preset time threshold.
According to a second aspect, an embodiment of the present invention provides an apparatus for allocating memory space, including: a determining module, configured to determine a target memory space, and an initial position and a tail position of the target memory space, according to a target message queue; a building module, configured to build a circular memory space queue connected end to end according to the initial position and the tail position of the target memory space, wherein the circular memory space queue comprises a plurality of subspaces; and an allocating module, configured to sequentially allocate subspaces in the circular memory space queue to the target message according to the size of the target message.
According to a third aspect, an embodiment of the present invention provides a computer device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to cause the at least one processor to perform the steps of the method for allocating memory space according to the first aspect or any one of the embodiments of the first aspect.
According to a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method for allocating memory space according to the first aspect or any one of the implementation manners of the first aspect.
The technical scheme of the invention has the following advantages:
The invention provides a memory space allocation method, a memory space allocation device and computer equipment. The memory space allocation method comprises the following steps: determining a target memory space, and an initial position and a tail position of the target memory space, according to the target message queue; constructing a circular memory space queue connected end to end according to the initial position and the tail position of the target memory space, where the circular memory space queue comprises a plurality of subspaces; and sequentially allocating subspaces in the circular memory space queue to the target messages according to the sizes of the target messages.
By implementing the invention, the problem in the related art that the real-time requirement of an automatic driving system cannot be met is solved. With the constructed circular memory space queue, a new message can be stored directly in the segment of memory immediately following the memory occupied by the previous message, so the transmitted data is stored in the memory space in time, the data storage requirement of the automatic driving system is met in real time, memory space fragments are fully utilized, and fragmentation of the system memory is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a specific example of a method for allocating memory space according to an embodiment of the present invention;
Fig. 2 is a diagram illustrating a circular memory space queue in a method for allocating memory space according to an embodiment of the present invention;
Fig. 3 is a flowchart of a specific example of a method for allocating memory space according to an embodiment of the present invention;
Fig. 4 is a schematic block diagram of a specific example of an apparatus for allocating memory space according to an embodiment of the present invention;
Fig. 5 is a diagram of an exemplary computer device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In existing automatic driving technology, an automatic driving system mainly includes sensor data acquisition nodes, sensing nodes, planning nodes, decision nodes, control nodes and execution nodes; these nodes can complete the sensing, decision and control algorithms only on the premise that the underlying data is stored and transmitted. When data is transmitted between a sensor data acquisition node and the next algorithm unit, and between every two algorithm nodes, two steps are required: data caching and data publishing.
Specifically, an autopilot function module directly applies to the operating system for memory of a certain size; the operating system finds a suitable continuous storage space in the physical memory and returns it to the user; the user releases the memory after use, that is, the user triggers the memory release operation and the operating system completes the recovery of the memory. However, the messages published by the functional modules are of indefinite length and the memory cannot be divided evenly, so how to allocate the best-matching space for a message is a major problem in memory-based communication.
Two allocation methods exist in existing memory allocation technology: static memory allocation and dynamic memory allocation. (1) Static memory allocation: the user determines the maximum size of each message in advance and uses it as the size of a storage queue unit; when allocating memory, data can be stored according to the idle flag of each allocation unit, and any allocation unit can then hold at least one complete message, which is suitable for systems with large memory. (2) Dynamic memory allocation: the shared memory is divided into a number of storage units in advance, and a data storage space is formed from several contiguous small storage units according to the actual size of the message.
Based on the above problems, embodiments of the present invention provide a memory space allocation method, an apparatus and a computer device, which can be applied to an automatic driving operating system and implement, on a vehicle-mounted control platform, memory allocation for the information transmitted by the vehicle chassis and the vehicle body sensors, in order to meet the real-time requirement of the automatic driving system.
An embodiment of the present invention provides a method for allocating a memory space, as shown in fig. 1, the method includes:
step S11: determining a target memory space and an initial position and a tail position of the target memory space according to the target message queue; in this embodiment, the target message queue may be a series of message data determined by the target process, that is, a series of messages, for example, when the vehicle-mounted sensor acquires a plurality of items of data (for example, speed information, vehicle position information, and the like) of the vehicle, the target message queue may be generated, where the target message queue includes speed information of the autonomous vehicle at each time; the target memory space may be a segment of memory space allocated by an operating system of the autonomous vehicle in the physical memory space.
Specifically, according to the size of the target message queue, the operating system allocates a segment of storage space in physical memory for the target message queue to store the multiple items of data in the queue. First, the initial position (the head) and the tail position of the target memory space allocated by the operating system need to be determined, that is, the address of the head and the address of the tail of the target memory space.
Step S12: constructing a circular memory space queue connected end to end according to the initial position and the tail position of the target memory space, where the circular memory space queue comprises a plurality of subspaces. In this embodiment, the circular memory space queue may be a segment of circular memory, i.e. a circular buffer, formed by connecting the initial position and the tail position of the target memory space; the circular memory space queue is then evenly divided into a plurality of subspaces. Specifically, as shown in fig. 2, the total storage capacity of the circular memory space queue may be 8 MB; it may then be divided into 8 subspaces on 1 MB boundaries. The subspace corresponding to the initial position may be labeled 0, the adjacent subsequent subspaces are labeled 1, 2, 3, 4, 5 and 6 in turn, and the subspace corresponding to the tail position is labeled 7; the subspace labeled 7 is connected back to the subspace labeled 0. Front may point to the initial position of the circular memory space queue, and Rear may point to the subspace holding the tail of the stored messages. As shown in fig. 2, when Rear points to the subspace labeled 4, target messages are stored in the subspaces labeled 0 to 4.
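As an illustration only, the following C sketch shows one possible in-memory representation of such a circular memory space queue with 8 subspaces of 1 MB each; the type and function names (ring_queue, ring_init) and the per-subspace timestamp field are assumptions introduced here, not part of the patent disclosure.

```c
/* Illustrative sketch: a circular memory space queue as in fig. 2,
 * assuming 8 subspaces of 1 MB each. All identifiers are hypothetical. */
#include <stdint.h>
#include <string.h>
#include <time.h>

#define SUBSPACE_SIZE  (1u << 20)   /* 1 MB per subspace */
#define SUBSPACE_COUNT 8            /* labels 0..7, 8 MB in total */

typedef struct {
    uint8_t *base;                          /* initial position of the target memory space */
    int front;                              /* label of the oldest occupied subspace */
    int rear;                               /* label of the next free subspace */
    int used;                               /* number of occupied subspaces */
    struct timespec stamp[SUBSPACE_COUNT];  /* write time of the message in each subspace */
} ring_queue;

static void ring_init(ring_queue *q, uint8_t *target_memory) {
    memset(q, 0, sizeof(*q));
    q->base = target_memory;  /* subspace 7 wraps back to subspace 0: all index arithmetic is mod 8 */
}
```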
Step S13: sequentially allocating subspaces in the circular memory space queue to the target messages according to the sizes of the target messages. In this embodiment, the target message queue includes a plurality of target messages, and subspaces in the circular memory space queue are allocated to the target messages in sequence according to their sizes. Specifically, the target message queue may include three target messages with lengths of 1 MB, 0.5 MB and 1.8 MB; in this case, the subspace labeled 0 is allocated to the 1 MB target message, the subspace labeled 1 is allocated to the 0.5 MB target message, and the subspaces labeled 2 and 3 are allocated to the 1.8 MB target message.
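Continuing the sketch above (and reusing its definitions), one possible allocation routine computes the number of subspaces a message occupies by ceiling division and places the message in consecutive labels starting at the current rear. The function name ring_alloc, the return convention and the wrap-around copy are assumptions, not the patent's wording.

```c
/* Continues the previous sketch: occupy consecutive subspaces for one message. */
static int ring_alloc(ring_queue *q, const void *msg, size_t msg_size) {
    int needed = (int)((msg_size + SUBSPACE_SIZE - 1) / SUBSPACE_SIZE); /* ceiling division */
    if (needed > SUBSPACE_COUNT - q->used)
        return -1;                                   /* remaining subspaces cannot hold the message */

    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);

    int start = q->rear;
    const uint8_t *src = (const uint8_t *)msg;
    size_t left = msg_size;
    for (int i = 0; i < needed; i++) {
        int label = (start + i) % SUBSPACE_COUNT;    /* consecutive labels, 7 wraps to 0 */
        size_t chunk = left < SUBSPACE_SIZE ? left : SUBSPACE_SIZE;
        memcpy(q->base + (size_t)label * SUBSPACE_SIZE, src, chunk);
        q->stamp[label] = now;                       /* used later by the timeout mechanism */
        src += chunk;
        left -= chunk;
    }
    q->rear = (start + needed) % SUBSPACE_COUNT;
    q->used += needed;
    return start;                                    /* label of the first occupied subspace */
}
```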
The invention provides a memory space allocation method, which comprises the following steps: determining a target memory space, and an initial position and a tail position of the target memory space, according to the target message queue; constructing a circular memory space queue connected end to end according to the initial position and the tail position of the target memory space, where the circular memory space queue comprises a plurality of subspaces; and sequentially allocating subspaces in the circular memory space queue to the target messages according to the sizes of the target messages.
By implementing the invention, the problem in the related art that the real-time requirement of an automatic driving system cannot be met is solved. With the constructed circular memory space queue, a new message can be stored directly in the segment of memory immediately following the memory occupied by the previous message, so the transmitted data is stored in the memory space in time, the data storage requirement of the automatic driving system is met in real time, memory space fragments are fully utilized, and fragmentation of the system memory is reduced.
As an alternative embodiment of the present invention, as shown in fig. 3, the step S13, sequentially allocating subspaces in the circular memory space queue to the target message according to the size of the target message, includes:
step S131: determining the occupied quantity of the subspaces according to the size of each target message; in this embodiment, the number of subspaces that each target message should occupy is determined according to the length of each target message in the target message queue. Specifically, the target message queue may include three target messages, where a length of the first target message is 1MB, a length of the second target message is 0.5MB, and a length of the third target message is 1.8MB, and at this time, an occupied number of subspaces of the first target message is 1, an occupied number of subspaces of the second target message is 1, and an occupied number of subspaces of the third target message is 2.
Step S132: determining the storage location of each target message in the circular memory space queue according to the occupied number, where the storage location comprises at least one subspace, and the storage locations occupied by the target messages sequentially stored in the circular memory space queue are contiguous. In this embodiment, the size of the circular memory space queue may be 8 MB; the queue may then be evenly divided into eight subspaces, labeled 0 to 7. The storage location of the first target message may be the subspace labeled 0; the storage location of the second target message may be the subspace labeled 1; the storage location of the third target message may be the subspaces labeled 2 to 3, the subspaces occupied by each target message being contiguous.
In an alternative embodiment, when the storage location of a target message includes a plurality of subspaces, those subspaces are contiguous. In this embodiment, when the target message needs to occupy multiple subspaces in the circular memory space queue, for example when a fourth target message is 2.8 MB, it occupies 3 subspaces; in this case, the storage location of the fourth target message in the circular memory space queue may be the subspaces labeled 4 to 6, that is, the fourth target message occupies three consecutive subspaces, namely those labeled 4 to 6.
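A hypothetical usage of the sketches above, reproducing the label assignments given in this embodiment (1 MB to label 0, 0.5 MB to label 1, 1.8 MB to labels 2-3, and 2.8 MB to labels 4-6); malloc merely stands in for whatever target memory space the operating system would actually return.

```c
/* Reuses ring_queue, ring_init and ring_alloc from the sketches above. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    uint8_t *target  = malloc((size_t)SUBSPACE_COUNT * SUBSPACE_SIZE);
    uint8_t *payload = malloc(3u << 20);   /* dummy message contents, large enough for every call */
    if (target == NULL || payload == NULL)
        return 1;

    ring_queue q;
    ring_init(&q, target);

    printf("%d\n", ring_alloc(&q, payload, 1u << 20));            /* 1 MB   -> label 0    */
    printf("%d\n", ring_alloc(&q, payload, 512u << 10));          /* 0.5 MB -> label 1    */
    printf("%d\n", ring_alloc(&q, payload, (1u << 20) * 9 / 5));  /* 1.8 MB -> labels 2-3 */
    printf("%d\n", ring_alloc(&q, payload, (1u << 20) * 14 / 5)); /* 2.8 MB -> labels 4-6 */

    free(payload);
    free(target);
    return 0;
}
```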
As an alternative embodiment of the invention, the method further comprises:
and when the size of the remaining subspace in the circular memory space queue is smaller than the size of a newly acquired message, releasing subspaces in the circular memory space queue according to the sizes of the remaining subspace and the new message. In this embodiment, when the remaining capacity of the circular memory space queue cannot meet the storage requirement of a newly arrived target message, subspaces already storing other target messages may be released according to the size of the newly arrived target message and the number of remaining subspaces in the current circular memory space queue, so as to meet the storage requirement of the newly arrived target message.
As an optional embodiment of the present invention, releasing each subspace in the circular memory space queue according to the remaining subspaces and the size of the new message includes:
determining the number of subspaces to be released according to the difference between the sizes of the remaining subspace and the new message; and sequentially releasing the corresponding number of subspaces in the circular memory space queue.
In this embodiment, the target message queue may include five target messages, where the first target message is 1 MB long, the second is 0.5 MB, the third is 1.8 MB, the fourth is 2.8 MB, and the fifth may be 3.2 MB. The target messages are stored in the circular memory space queue in sequence, so the first four target messages occupy the subspaces labeled 0 to 6. When the fifth target message arrives, only 1 subspace remains in the circular memory space queue, which cannot meet the storage requirement of a 3.2 MB message. At this time, the number of subspaces occupied by the fifth target message is determined to be 4 according to its size, so 3 subspaces need to be released; specifically, the subspaces labeled 0 to 2 are released and then used, together with the remaining subspace, to store the fifth target message.
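A sketch of this step, continuing the example above: the number of subspaces to release is the difference between the subspaces the new message needs and those still free, and they are released in order from the front of the queue. The function name and the oldest-first policy are assumptions consistent with the labels 0-2 released in this embodiment.

```c
/* Continues the sketch: free just enough subspaces so the new message fits. */
static void ring_make_room(ring_queue *q, size_t msg_size) {
    int needed = (int)((msg_size + SUBSPACE_SIZE - 1) / SUBSPACE_SIZE);
    int remaining = SUBSPACE_COUNT - q->used;
    int to_release = needed - remaining;            /* e.g. 4 needed - 1 remaining = 3 */
    for (int i = 0; i < to_release; i++) {
        q->front = (q->front + 1) % SUBSPACE_COUNT; /* release one subspace at the front */
        q->used--;
    }
}
```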
In an optional embodiment, when a target message is written into a subspace in the circular memory space queue, it carries a timestamp. When a newly arrived message needs the memory, only the current time needs to be compared with that earlier timestamp; if the difference exceeds a specified preset time threshold (for example, 5 ms), it may be determined that the life cycle of the earlier message has ended, and the memory space at this location is released. If the threshold is not exceeded, the memory space at this location may be skipped, and the following memory spaces are checked in turn for whether they can be released, until a memory space matching the size of the message is found.
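The timeout check could look like the following continuation of the sketch; the 5 ms threshold follows the example above, while the helper names are illustrative.

```c
/* Continues the sketch: release the front subspace only if its message has expired. */
#define LIFETIME_MS 5

static long ms_since(const struct timespec *then, const struct timespec *now) {
    return (now->tv_sec - then->tv_sec) * 1000L + (now->tv_nsec - then->tv_nsec) / 1000000L;
}

static int ring_try_release_front(ring_queue *q) {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    if (q->used == 0 || ms_since(&q->stamp[q->front], &now) <= LIFETIME_MS)
        return 0;                                   /* life cycle not over yet: skip this subspace */
    q->front = (q->front + 1) % SUBSPACE_COUNT;     /* life cycle ended: release the subspace */
    q->used--;
    return 1;
}
```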
As an alternative embodiment of the invention, the method further comprises:
and when the subspaces in the circular memory space queue all store target messages and a new target message is acquired, releasing subspaces in the circular memory space queue according to the size of the new target message. In this embodiment, when every subspace in the circular memory space queue stores a target message and a new target message arrives, subspaces already storing other target messages need to be released, and the specific number of subspaces to release may be determined according to the length of the newly arrived target message.
Specifically, when the length of the newly arrived message is 2.8MB, the number of subspaces to be released may be 3, that is, at this time, the subspaces labeled 0-2 in the circular memory space queue need to be released, and then used for storing the newly arrived target message.
As an optional embodiment of the present invention, releasing each subspace in the circular memory space queue according to the size of the new target message includes:
determining the number of subspaces to be released according to the size of the new target message; and sequentially releasing the corresponding number of subspaces starting from the initial position of the circular memory space queue. In this embodiment, when the length of the newly arrived message is 2.8 MB, the number of subspaces to be released may be 3; that is, the subspaces labeled 0 to 2 in the circular memory space queue need to be released and are then used to store the newly arrived target message. In other words, when the circular memory space queue is full, a corresponding number of subspaces needs to be released starting from the initial position of the queue according to the size of the newly arrived target message.
In the memory space allocation method provided by the embodiment of the invention, occupied subspaces can be released in time, so memory fragments are eliminated promptly; that is, the memory space of the circular buffer can be reused cyclically.
As an alternative embodiment of the invention, the method further comprises:
and acquiring the time at which the target message stored at the storage location was written, and releasing the subspace corresponding to the storage location when that time exceeds a preset time threshold. In this embodiment, the preset time threshold may be the effective access duration of the memory determined according to the actual application scenario. When the data stored in a memory space has exceeded its effective access duration, the data is invalid; that is, data stored in a memory space has a certain effective storage time and cannot be accessed beyond that time, at which point the memory space is released.
During allocation of the memory space of the circular memory space queue, messages are stored in the circular queue in their order of arrival; therefore, when the circular buffer has been fully allocated and the memory space occupied by previously stored messages needs to be released, memory units can be released in combination with a timeout mechanism.
In the automatic driving system related to the method of the embodiment, a shared memory mechanism is adopted during inter-process communication, so that higher real-time performance can be realized.
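Not from the patent text, but as one way the target memory space could be shared between a publishing process and a subscribing process as mentioned here, a POSIX shared-memory mapping might be used; the segment name, permissions and reduced error handling below are assumptions.

```c
/* Minimal POSIX shared-memory sketch: map the same segment in each process. */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <stdint.h>
#include <stddef.h>

static uint8_t *map_target_memory(const char *name, size_t bytes) {
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600); /* same name in every process */
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, (off_t)bytes) != 0) {          /* size of the circular queue, e.g. 8 MB */
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                                       /* the mapping stays valid after close */
    return p == MAP_FAILED ? NULL : (uint8_t *)p;
}
```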
An embodiment of the present invention provides an apparatus for allocating memory space, as shown in fig. 4, including:
a determining module 21, configured to determine a target memory space, and an initial position and a tail position of the target memory space, according to the target message queue; the detailed implementation can be found in the related description of step S11 in the above method embodiment.
A building module 22, configured to build a circular memory space queue connected end to end according to the initial position and the tail position of the target memory space, where the circular memory space queue comprises a plurality of subspaces; the detailed implementation can be found in the related description of step S12 in the above method embodiment.
An allocating module 23, configured to sequentially allocate subspaces in the circular memory space queue to the target messages according to the sizes of the target messages. The detailed implementation can be found in the related description of step S13 in the above method embodiment.
The invention provides a memory space allocation apparatus, comprising: a determining module 21, configured to determine a target memory space, and an initial position and a tail position of the target memory space, according to the target message queue; a building module 22, configured to build a circular memory space queue connected end to end according to the initial position and the tail position of the target memory space, where the circular memory space queue comprises a plurality of subspaces; and an allocating module 23, configured to sequentially allocate subspaces in the circular memory space queue to the target messages according to the sizes of the target messages.
By implementing the invention, the problem in the related art that the real-time requirement of an automatic driving system cannot be met is solved. With the constructed circular memory space queue, a new message can be stored directly in the segment of memory immediately following the memory occupied by the previous message, so the transmitted data is stored in the memory space in time, the data storage requirement of the automatic driving system is met in real time, memory space fragments are fully utilized, and fragmentation of the system memory is reduced.
An embodiment of the present invention further provides a computer device, as shown in fig. 5, the computer device may include a processor 31 and a memory 32, where the processor 31 and the memory 32 may be connected by a bus 30 or in another manner, and fig. 5 takes the connection by the bus 30 as an example.
The processor 31 may be a Central Processing Unit (CPU). The Processor 31 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, or combinations thereof.
The memory 32 is a non-transitory computer readable storage medium, and can be used for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the memory space allocation method in the embodiment of the present invention. The processor 31 executes various functional applications and data processing of the processor by running the non-transitory software programs, instructions and modules stored in the memory 32, that is, the method for allocating memory space in the above method embodiments is implemented.
The memory 32 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor 31, and the like. Further, the memory 32 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 32 may optionally include memory located remotely from the processor 31, and these remote memories may be connected to the processor 31 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 32, and when executed by the processor 31, perform the memory space allocation method in the embodiments shown in fig. 1 and fig. 3.
The details of the computer device can be understood by referring to the corresponding related descriptions and effects in the embodiments shown in fig. 1 and fig. 3, and are not described herein again.
The embodiment of the present invention further provides a non-transitory computer-readable storage medium, which stores computer instructions used to enable a computer to execute the method for allocating memory space described in any one of the above embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), a solid-state drive (SSD), or the like; the storage medium may also comprise a combination of the above kinds of memories.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here, and obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (11)

1. A method for allocating memory space, comprising:
determining a target memory space and an initial position and a tail position of the target memory space according to a target message queue;
constructing a circular memory space queue connected end to end according to the initial position and the tail position of the target memory space, wherein the circular memory space queue comprises a plurality of subspaces;
and sequentially allocating subspaces in the circular memory space queue to the target message according to the size of the target message.
2. The method of claim 1, wherein said sequentially allocating subspaces in the circular memory space queue for a target message according to a size of the target message comprises:
determining the number of subspaces to be occupied according to the size of each target message;
and determining the storage location of each target message in the circular memory space queue according to the occupied number, wherein the storage location comprises at least one subspace, and the storage locations occupied by the target messages sequentially stored in the circular memory space queue are contiguous.
3. The method of claim 2, wherein when the storage location of the target message comprises a plurality of subspaces, the plurality of subspaces comprised by the storage location of the target message are contiguous.
4. The method of claim 2 or 3, further comprising:
and when the size of the remaining subspace in the circular memory space queue is smaller than the size of a newly acquired new message, releasing subspaces in the circular memory space queue according to the sizes of the remaining subspace and the new message.
5. The method of claim 4, wherein releasing each subspace in the circular memory space queue according to the size of the remaining subspace and new message comprises:
determining the number of subspaces to be released according to the difference between the sizes of the remaining subspace and the new message;
and sequentially releasing the corresponding number of subspaces in the circular memory space queue.
6. The method of claim 2 or 3, further comprising:
and when the subspaces in the circular memory space queue all store target messages and a new target message is acquired, releasing subspaces in the circular memory space queue according to the size of the new target message.
7. The method of claim 6, wherein said releasing each subspace in the circular memory space queue according to the size of the new target message comprises:
determining the number of subspaces to be released according to the size of the new target message;
and sequentially releasing the corresponding number of subspaces starting from the initial position of the circular memory space queue.
8. The method of claim 2 or 3, further comprising:
and acquiring the time at which the target message stored at the storage location was written, and releasing the subspace corresponding to the storage location when that time exceeds a preset time threshold.
9. An apparatus for allocating memory space, comprising:
a determining module, configured to determine a target memory space, and an initial position and a tail position of the target memory space, according to a target message queue;
a building module, configured to build a circular memory space queue connected end to end according to the initial position and the tail position of the target memory space, wherein the circular memory space queue comprises a plurality of subspaces;
and an allocating module, configured to sequentially allocate subspaces in the circular memory space queue to the target message according to the size of the target message.
10. A computer device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the steps of the method for allocating memory space of any one of claims 1-8.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for allocating memory space according to any one of claims 1 to 8.
CN202110065041.9A 2021-01-18 2021-01-18 Memory space allocation method and device and computer equipment Pending CN112732448A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110065041.9A CN112732448A (en) 2021-01-18 2021-01-18 Memory space allocation method and device and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110065041.9A CN112732448A (en) 2021-01-18 2021-01-18 Memory space allocation method and device and computer equipment

Publications (1)

Publication Number Publication Date
CN112732448A true CN112732448A (en) 2021-04-30

Family

ID=75592216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110065041.9A Pending CN112732448A (en) 2021-01-18 2021-01-18 Memory space allocation method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN112732448A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550124A (en) * 2015-12-09 2016-05-04 四川长虹电器股份有限公司 Information processing method and electronic device
CN107765991A (en) * 2016-08-19 2018-03-06 北京百度网讯科技有限公司 Method and apparatus for transmitting message
US10176886B1 (en) * 2017-07-07 2019-01-08 Seagate Technology Llc Multi-level data block error detection code
CN109753479A (en) * 2018-12-28 2019-05-14 杭州迪普科技股份有限公司 A kind of data distributing method and device, equipment, medium
CN110209493A (en) * 2019-04-11 2019-09-06 腾讯科技(深圳)有限公司 EMS memory management process, device, electronic equipment and storage medium
CN110704335A (en) * 2019-09-03 2020-01-17 苏州浪潮智能科技有限公司 Data reading and writing method and device based on asynchronous ring buffer
CN111404842A (en) * 2019-12-11 2020-07-10 杭州海康威视***技术有限公司 Data transmission method, device and computer storage medium

Similar Documents

Publication Publication Date Title
CN107025205B (en) Method and equipment for training model in distributed system
US12001681B2 (en) Distributed storage system and data processing method
US20180341411A1 (en) Intelligent layout of composite data structures in tiered storage
US9229751B2 (en) Apparatus and method for managing virtual memory
CN106959893B (en) Accelerator, memory management method for accelerator and data processing system
EP4137956A1 (en) On-chip cache device, on-chip cache read/write method, and computer readable medium
TWI752415B (en) Data transmission method and device in TEE system
US11954528B2 (en) Technologies for dynamically sharing remote resources across remote computing nodes
US20180150417A1 (en) Deallocation of memory buffer in multiprocessor systems
CN112598565B (en) Service operation method and device based on accelerator card, electronic equipment and storage medium
DE10219621A1 (en) Pipeline controller for computer system, has resource control logic that identifies availability of resources and notifies flow control logic to adjust resource utilization value
CN111857992B (en) Method and device for allocating linear resources in Radosgw module
CN104866528A (en) Multi-platform data acquisition method and system
KR20220082917A (en) Migrating user interface-based pages to improve performance
CN104965793B (en) A kind of cloud storage data node device
WO2022135160A1 (en) Releasing method and releasing system for buffer space, and electronic device and storage medium
CN112732448A (en) Memory space allocation method and device and computer equipment
EP3188026B1 (en) Memory resource management method and apparatus
CN109062781B (en) Message buffer management method and device
CN114298294B (en) Neural network memory optimization method and device based on hardware accelerator
CN113535597B (en) Memory management method, memory management unit and Internet of things equipment
US11003578B2 (en) Method and system for parallel mark processing
EP3696674A1 (en) Triggered operations for collective communication
CN105637483B (en) Thread migration method, device and system
CN113093994A (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210430)