CN113791910A - Memory allocation method, memory allocation device, electronic equipment and readable storage medium


Info

Publication number
CN113791910A
Authority
CN
China
Prior art keywords
memory
memory block
multiplexing
node
nodes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111108224.0A
Other languages
Chinese (zh)
Inventor
冯浩恩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202111108224.0A priority Critical patent/CN113791910A/en
Publication of CN113791910A publication Critical patent/CN113791910A/en
Priority to PCT/CN2022/119685 priority patent/WO2023045879A1/en
Pending legal-status Critical Current

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06F: Electric digital data processing
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5011: Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016: Allocation of resources to service a request, the resource being the memory
    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06N: Computing arrangements based on specific computational models
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks


Abstract

The present application discloses a memory allocation method, a memory allocation apparatus, an electronic device, and a readable storage medium, and belongs to the field of communications technologies. The memory allocation method includes: acquiring global information of a neural network; obtaining a memory block multiplexing policy of the neural network based on the global information; and determining a memory allocation policy of the neural network based on the memory block multiplexing policy.

Description

Memory allocation method, memory allocation device, electronic equipment and readable storage medium
Technical Field
The present application belongs to the field of communications technologies, and in particular relates to a memory allocation method, a memory allocation apparatus, an electronic device, and a readable storage medium.
Background
With the rapid development of artificial intelligence and big data, and in order to balance user experience against privacy protection, more and more deep neural network algorithms are deployed directly on electronic devices. Forward-inference computation of a deep neural network must request a large amount of running memory to hold intermediate results correctly, and the limited running memory of electronic devices is a major factor hindering the design of neural networks for mobile terminals.
Disclosure of Invention
An object of the embodiments of the present application is to provide a memory allocation method, a memory allocation apparatus, an electronic device, and a readable storage medium that can alleviate the running-memory limitations faced by deep neural network algorithms in the related art.
In a first aspect, an embodiment of the present application provides a memory allocation method, including:
acquiring global information of a neural network;
obtaining a memory block multiplexing policy of the neural network based on the global information;
and determining a memory allocation policy of the neural network based on the memory block multiplexing policy.
In a second aspect, an embodiment of the present application provides a memory allocation apparatus, including:
an acquisition module, configured to acquire global information of a neural network;
an allocation module, configured to obtain a memory block multiplexing policy of the neural network based on the global information;
and a determining module, configured to determine a memory allocation policy of the neural network based on the memory block multiplexing policy.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the present application, an implementation of a neural network memory reuse policy is provided. Specifically, a memory block multiplexing policy is determined for the nodes of a neural network according to the global information of the network, so that when the neural network runs, memory is allocated to its nodes according to that policy. In this way, the reuse rate of the memory of the electronic device is improved and the total running memory required for forward inference of the neural network is reduced, which in turn allows the electronic device to support more complex neural networks, a greater diversity of execution platforms, and a larger number of neural networks running in parallel.
Drawings
FIG. 1 is a schematic flow chart of a memory allocation method according to an embodiment of the present application;
FIG. 2 is a first schematic diagram of a memory allocation method according to an embodiment of the present application;
FIG. 3 is a second schematic diagram of a memory allocation method according to an embodiment of the present application;
FIG. 4 is a third schematic diagram of a memory allocation method according to an embodiment of the present application;
FIG. 5 is a schematic block diagram of a memory allocation apparatus according to an embodiment of the present application;
FIG. 6 is a first schematic block diagram of an electronic device according to an embodiment of the present application;
FIG. 7 is a second schematic block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments that a person of ordinary skill in the art can derive from the embodiments given herein are intended to fall within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and are not necessarily used to describe a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application are capable of operating in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the objects before and after it are in an "or" relationship.
The memory allocation method, the memory allocation apparatus, the electronic device, and the readable storage medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
An embodiment of the present application provides a memory allocation method. As shown in fig. 1, the memory allocation method includes:
Step 102: acquire global information of a neural network;
Step 104: obtain a memory block multiplexing policy of the neural network based on the global information;
Step 106: determine a memory allocation policy of the neural network based on the memory block multiplexing policy.
This embodiment provides an implementation of a neural network memory reuse policy. Specifically, a memory block multiplexing policy is determined for the nodes of a neural network according to the global information of the network, so that when the neural network runs, memory is allocated to its nodes according to that policy.
In this way, the reuse rate of the memory of the electronic device is improved and the total running memory required for forward inference of the neural network is reduced, which in turn allows the electronic device to support more complex neural networks, a greater diversity of execution platforms, and a larger number of neural networks running in parallel.
Further, in an embodiment of the present application, the global information includes the memory block required by each node of the neural network. Obtaining the memory block multiplexing policy of the neural network based on the global information includes: sorting the nodes of the neural network in descending order of required memory block size; and allocating memory blocks to the nodes in that sorted order, generating at least one memory block multiplexing policy. Determining the memory allocation policy of the neural network based on the memory block multiplexing policy includes: taking a first target multiplexing policy as the memory allocation policy of the neural network, where the first target multiplexing policy is a memory block multiplexing policy whose total memory block size is less than or equal to a first preset threshold.
In this embodiment, all nodes of the neural network on the electronic device are sorted in descending order of the memory block each node requires. Memory blocks are then allocated to the nodes from front to back in that order, so that larger memory blocks are reused preferentially, reducing the total running memory the neural network must request.
Allocating memory blocks to the nodes yields at least one memory allocation scheme, and the schemes whose total memory is less than or equal to the first preset threshold are output. For example, if 1000 allocation schemes are obtained, they are sorted by total memory size and the scheme with the smallest total allocation is selected as the optimal memory reuse policy of the neural network, so that when the neural network executes, memory blocks are allocated to each node according to this optimal policy.
Through this implementation of the neural network memory reuse policy, the reuse rate of the memory of the electronic device can be improved and the total running memory required for forward inference of the neural network reduced.
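As an illustration only (not part of the patent disclosure), the following Python sketch shows the descending-order reuse idea: nodes are sorted by required memory, and each node reuses an already-allocated block when the block is large enough and the lifetimes of its current users do not conflict. All identifiers, the node fields, and the lifetime-overlap test are hypothetical assumptions.

```python
# Hypothetical sketch of descending-order memory-block reuse.
# Assumption: a block can be shared by two nodes only if their
# lifetimes (first-use..last-use step indices) do not overlap.

def lifetimes_overlap(a, b):
    return not (a["last"] < b["first"] or b["last"] < a["first"])

def greedy_reuse(nodes):
    # Sort nodes by required memory, largest first.
    order = sorted(nodes, key=lambda n: n["size"], reverse=True)
    blocks = []  # each block: {"size": units, "users": [nodes]}
    for node in order:
        target = None
        for blk in blocks:  # blocks were created large-first
            fits = blk["size"] >= node["size"]
            free = all(not lifetimes_overlap(node, u) for u in blk["users"])
            if fits and free:
                target = blk
                break
        if target is None:  # no reusable block: allocate a new one
            target = {"size": node["size"], "users": []}
            blocks.append(target)
        target["users"].append(node)
    return blocks

nodes = [
    {"name": "n0", "size": 20, "first": 0, "last": 1},
    {"name": "n1", "size": 36, "first": 2, "last": 3},
    {"name": "n2", "size": 16, "first": 4, "last": 5},
]
plan = greedy_reuse(nodes)
print("total running memory:", sum(b["size"] for b in plan))  # 36
```

In this toy example all three nodes have disjoint lifetimes, so they all share one 36-unit block; the later sections refine this greedy pass into a grouped search over many candidate policies.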
It should be noted that memory reused by the neural network on Central Processing Unit (CPU) and Digital Signal Processor (DSP) hardware can be further compressed by means of pointer offsets.
Further, in an embodiment of the present application, allocating memory blocks to the nodes of the neural network in their sorted order and generating at least one memory block multiplexing policy includes: dividing the nodes of the neural network into N node groups according to the sorted order of the nodes; and allocating memory blocks to the nodes of each node group, generating at least one memory block multiplexing policy, where N is a positive integer greater than or equal to 4.
In this embodiment, all nodes of the neural network are divided into N node groups in descending order of required memory block size. Each node group contains at least one node, and the groups need not contain equal numbers of nodes. For example, if the neural network contains 20 nodes, the first 6 nodes may form the 1st node group, the 7th to 10th nodes the 2nd node group, the 11th to 14th nodes the 3rd node group, and the remaining nodes the 4th node group. Memory blocks are then allocated to the nodes of each node group, generating at least one memory block multiplexing policy for the neural network, and finally the multiplexing policy with the smallest total memory allocation is selected as the optimal memory allocation policy of the neural network.
By partitioning the many possible memory-block reuse combinations into groups (i.e., pruning the search) and searching each group in depth, the reuse policy with the smallest total running memory can be selected, so that running-memory compression and optimization can be achieved for a neural network of any structure.
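For illustration, a minimal sketch (hypothetical helper, not from the patent) of cutting the descending-sorted node list into groups, using the 6/4/4/remainder split from the 20-node example above:

```python
# Hypothetical grouping of the descending-sorted node list; the group
# sizes 6, 4, 4 and "the rest" follow the 20-node example in the text.

def split_into_groups(sorted_nodes, head_sizes=(6, 4, 4)):
    groups, start = [], 0
    for size in head_sizes:
        groups.append(sorted_nodes[start:start + size])
        start += size
    groups.append(sorted_nodes[start:])  # last group: remaining nodes
    return groups

groups = split_into_groups([f"node{k}" for k in range(20)])
print([len(g) for g in groups])  # [6, 4, 4, 6]
```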
Further, in an embodiment of the present application, allocating memory blocks to the nodes of each node group and generating at least one memory block multiplexing policy includes: determining at least one first multiplexed memory block for the i-th node in the 1st node group, allocating the first multiplexed memory block to the i-th node, and generating at least one first memory block multiplexing policy; determining a first non-multiplexed memory block for the i-th node, allocating the first non-multiplexed memory block to the i-th node, and generating at least one second memory block multiplexing policy; and taking a second target multiplexing policy as the memory block multiplexing policy, where i is greater than or equal to 1 and less than or equal to O, O is the total number of nodes in the 1st node group, and the second target multiplexing policy is a first memory block multiplexing policy or a second memory block multiplexing policy whose total memory block size is less than or equal to a second preset threshold.
In this embodiment, the memory block multiplexing policy is determined by a permutation-and-combination search; this search, similar to a tree search, can explore every reuse possibility.
Specifically, for each node in the 1st node group, at least one first multiplexed memory block of the node is determined and allocated to the node, generating at least one first memory block multiplexing policy; a first non-multiplexed memory block of the node is also determined and allocated to the node, generating at least one second memory block multiplexing policy. The first and second memory block multiplexing policies are then ranked in descending order of total memory block size, and the policies whose total memory (i.e., the sum of all memory blocks) is less than or equal to the second preset threshold are taken as the memory block multiplexing policies for the nodes of the 1st node group of the neural network.
For example, suppose the 1st node group contains the nodes with the 6 largest memory blocks in the neural network. While traversing these nodes and searching for memory blocks that are large enough to reuse and free of connection conflicts, the non-reuse case is also considered. Specifically, as shown in fig. 2, the first node 202 and the second node 204 represent traversed nodes; in the first node 202, the number 36 denotes a memory requirement of 36 units and the number 2 denotes its position in the neural network, while in the second node 204, the number 20 denotes a memory requirement of 20 units and the number 0 denotes its position in the neural network. When the traversal reaches the first node 202, the reusable memory block 206 (of size 40 units) is first screened out, and a branch is split off to consider how subsequent memory can be reused.
At the same time, the non-reuse case is also considered: instead of reusing the memory block 206 directly, a separate memory block 208 (of size 36 units) is allocated to the first node 202, and another branch is split off to consider the memory multiplexing policies of the other nodes.
Besides the non-reuse case, when the current node faces several reusable memory blocks, a separate branch is split off for each of them to consider the subsequent memory multiplexing policies. Specifically, as shown in fig. 2, the left branch of the second node 204 has three reusable memory block choices, with sizes of 20, 28, and 36 units respectively, and all of these choices are traversed.
As more nodes are traversed, more branches are generated, and the number of branches grows exponentially; however, since only the nodes with the 6 largest memory blocks are searched at this stage, the search amount does not exceed 720. After all memory multiplexing policies for these 6 nodes have been obtained, the candidate policies are sorted by total memory size, and the 10 policies with the smallest total memory are selected to enter the next round of search.
In this way, the memory block multiplexing policy is determined by a permutation-and-combination search, memory blocks are allocated preferentially to the nodes of the neural network with the largest memory requirements, and the total running memory the neural network must request is reduced.
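A rough sketch of the branching search described above, under the same hypothetical node/block representation as the earlier sketch: for every node of the group, one branch is created per compatible reusable block, plus one non-reuse branch, and the resulting candidate policies are sorted by total memory. Function and field names are assumptions, not the patent's notation.

```python
import copy

def lifetimes_overlap(a, b):
    return not (a["last"] < b["first"] or b["last"] < a["first"])

def search_group(group_nodes, partial_plans):
    # Each plan is a list of blocks: {"size": units, "users": [nodes]}.
    plans = partial_plans
    for node in group_nodes:
        branched = []
        for plan in plans:
            # One branch per reusable, conflict-free block.
            for idx, blk in enumerate(plan):
                fits = blk["size"] >= node["size"]
                free = all(not lifetimes_overlap(node, u)
                           for u in blk["users"])
                if fits and free:
                    new_plan = copy.deepcopy(plan)
                    new_plan[idx]["users"].append(node)
                    branched.append(new_plan)
            # One extra branch for the non-reuse case.
            new_plan = copy.deepcopy(plan)
            new_plan.append({"size": node["size"], "users": [node]})
            branched.append(new_plan)
        plans = branched
    # Rank candidate policies by total memory, smallest first.
    return sorted(plans, key=lambda p: sum(b["size"] for b in p))
```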
Further, in an embodiment of the present application, allocating memory blocks to the nodes of each node group and generating at least one memory block multiplexing policy includes: determining, based on the memory block multiplexing policy determined for the (M-1)-th node group, at least one second multiplexed memory block for the j-th node in the M-th node group, allocating the second multiplexed memory block to the j-th node, and generating at least one third memory block multiplexing policy; determining a second non-multiplexed memory block for the j-th node, allocating the second non-multiplexed memory block to the j-th node, and generating at least one fourth memory block multiplexing policy; and taking a third target multiplexing policy as the memory block multiplexing policy, where M is greater than 1 and less than N, j is greater than or equal to 1 and less than or equal to P, P is the total number of nodes in the M-th node group, and the third target multiplexing policy is a third memory block multiplexing policy or a fourth memory block multiplexing policy whose total memory block size is less than or equal to a third preset threshold.
In this embodiment, after the memory block multiplexing policy of the (M-1)-th node group has been determined, for each node in the M-th node group, at least one second multiplexed memory block of the node is determined and allocated to the node, generating at least one third memory block multiplexing policy; a second non-multiplexed memory block of the node is also determined and allocated to the node, generating at least one fourth memory block multiplexing policy. The third and fourth memory block multiplexing policies are then ranked in descending order of total memory block size, and the policies whose total memory is less than or equal to the third preset threshold are taken as the memory block multiplexing policies for the nodes of the first M node groups of the neural network.
Illustratively, suppose the neural network contains 20 nodes, the first 6 nodes form the 1st node group, the 7th to 10th nodes form the 2nd node group, the 11th to 14th nodes form the 3rd node group, and the remaining nodes form the 4th node group. After the 10 best multiplexing policies for the 6 nodes with the largest memory blocks have been determined, the reuse search for the nodes with the 7th to 10th largest memory blocks is carried out on top of those 10 policies. The search still follows the permutation-and-combination strategy, splitting different branches as different local memory multiplexing policies; the number of branches at this stage does not exceed 2401. Each of the 10 policies obtained from the 1st node group is extended with its corresponding branches, and the 100 multiplexing policies with the smallest total memory allocation are selected as the input to the next step. The same reuse search is then performed on the nodes with the 11th to 14th largest memory blocks, and finally the 1000 multiplexing policies with the smallest total memory allocation are selected from the output as the final output of this stage.
In this way, the memory block multiplexing policy is determined by a permutation-and-combination search, memory blocks are allocated to the nodes in order of required memory block size, the total running memory the neural network must request is reduced, and compression and optimization of the running memory of the neural network are obtained.
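The staged procedure above behaves like a beam search: each group is expanded with the branching search, and only the policies with the smallest total memory survive into the next group. A hedged sketch reusing `search_group` from the previous example; the beam widths 10, 100, and 1000 follow the worked example and are assumptions:

```python
def staged_search(node_groups, beam_widths=(10, 100, 1000)):
    # Start from a single empty allocation plan, then expand group by
    # group, keeping only the best `width` policies after each stage.
    # The final node group is handled separately by the importance
    # judgment described in the next section, so zip() stopping after
    # the first three groups matches the text.
    plans = [[]]
    for group, width in zip(node_groups, beam_widths):
        plans = search_group(group, plans)[:width]
    return plans  # surviving policies, smallest total memory first
```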
Further, in an embodiment of the present application, the global information further includes topology information of the nodes of the neural network. Allocating memory blocks to the nodes of each node group and generating at least one memory block multiplexing policy includes: based on the memory block multiplexing policy determined for the (N-1)-th node group, allocating a memory block to the t-th node in the N-th node group according to the memory blocks of the 2 nodes that precede it in the topology, and generating at least one fifth memory block multiplexing policy; and taking the fifth memory block multiplexing policy as the memory block multiplexing policy, where t is greater than or equal to 1 and less than or equal to Q, and Q is the total number of nodes in the N-th node group.
In this embodiment, after the first N-1 node groups have been traversed, and because a deep neural network compresses its features progressively, the reuse allocation of the remaining nodes has little influence on the total memory size. The traversal of these remaining nodes (i.e., the nodes in the N-th node group) therefore no longer performs a permutation-and-combination search; instead, reusable memory blocks are selected by a direct importance judgment. Specifically, after the memory block multiplexing policy of the (N-1)-th node group has been determined, and in order to prevent local nodes from becoming deadlocked, each node in the last node group is allocated a memory block according to the memory blocks of the 2 nodes that precede it in the topology, generating at least one fifth memory block multiplexing policy. The fifth memory block multiplexing policies are taken as the memory block multiplexing policies for the nodes of the N node groups of the neural network.
Following the example above, the 1000 multiplexing policies with the smallest total memory allocation have been obtained. When continuing to extend the policy with the smallest total memory allocation, several multiplexing policies with the same total memory size are often encountered, and these policies must then be ranked by importance.
As shown in fig. 3, node 302 at position 0 and node 304 at position 3 share one memory block, node 306 at position 1 and node 308 at position 4 share another memory block, and node 310 at position 2 uses a memory block of its own. Note that the large number in each node represents the number of memory units the node requires, and the small number represents the node's topological position in the neural network. If the scheme shown in fig. 3 were adopted as the multiplexing policy of the current neural network, memory of 36 + 20 + 16 = 72 units would have to be allocated: the topological positions of node 302 and node 304 cause a local-node deadlock that reduces the number of reusable memory blocks, so that node 302 at position 0 and node 304 at position 3 can only be allocated memory on their own. In fact, the best policy is for node 302 at position 0, node 310 at position 2, and node 308 at position 4 to share the same memory block, compressing the total memory allocation to 36 + 20 = 56 units.
Therefore, when continuing to extend the policy with the smallest total memory allocation, if several multiplexing policies have the same total memory size, the topological positions of all nodes are examined and cross-node memory reuse is preferred as far as possible. This reduces the occurrence of deadlock and leaves more suitable reuse choices for the new nodes traversed later.
It should be noted that the actual topological position of each node is determined by recording the preceding and succeeding topological relations of every network node. Recording these relations avoids network-inference errors caused by reusing the wrong memory block, and prepares for the subsequent reuse of non-conflicting memory blocks. Cross nodes are two nodes separated by exactly one node between them; for the s-th node, its cross nodes include the (s-2)-th node and the (s+2)-th node. As shown in fig. 4, node 304 and node 306 are cross nodes, and node 308 and node 310 are cross nodes.
In addition, the more cross-node memory allocation a policy contains, the higher its importance; among multiplexing policies with the same total memory size, the policy ranked highest in importance is selected as the final output multiplexing policy.
In this way, reusable memory blocks are selected for the nodes with smaller memory requirements by means of an importance judgment, the local-node deadlock phenomenon is avoided, and the accuracy of memory block reuse is improved.
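As a hypothetical sketch of this tie-breaking rule: among surviving policies with the same (smallest) total memory, the one with the most cross-node sharing (two users of one block whose topological positions differ by exactly 2) is preferred. The `pos` field and all names are assumptions for illustration, not the patent's notation.

```python
def cross_node_pairs(plan):
    # Count pairs of nodes that share a block and are cross nodes,
    # i.e. whose topological positions differ by exactly 2.
    count = 0
    for blk in plan:
        users = blk["users"]
        for i in range(len(users)):
            for j in range(i + 1, len(users)):
                if abs(users[i]["pos"] - users[j]["pos"]) == 2:
                    count += 1
    return count

def pick_final_policy(plans):
    # Among minimum-total-memory policies, rank by cross-node reuse.
    total = lambda p: sum(b["size"] for b in p)
    best = min(total(p) for p in plans)
    ties = [p for p in plans if total(p) == best]
    return max(ties, key=cross_node_pairs)
```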
It should be noted that the execution body of the memory allocation method provided in the embodiments of the present application may be a memory allocation apparatus, or a control module in the memory allocation apparatus for executing the memory allocation method. In the embodiments of the present application, a memory allocation apparatus executing the memory allocation method is taken as an example to describe the memory allocation apparatus provided herein.
An embodiment of the present application provides a memory allocation apparatus. As shown in fig. 5, the memory allocation apparatus 500 includes:
an obtaining module 502, configured to acquire global information of a neural network;
an allocating module 504, configured to obtain a memory block multiplexing policy of the neural network based on the global information;
and a determining module 506, configured to determine a memory allocation policy of the neural network based on the memory block multiplexing policy.
This embodiment provides an implementation of a neural network memory reuse policy. Specifically, a memory block multiplexing policy is determined for the nodes of a neural network according to the global information of the network, so that when the neural network runs, memory is allocated to its nodes according to that policy. In this way, the reuse rate of the memory of the electronic device is improved and the total running memory required for forward inference of the neural network is reduced, which in turn allows the electronic device to support more complex neural networks, a greater diversity of execution platforms, and a larger number of neural networks running in parallel.
Further, in an embodiment of the present application, the global information includes the memory block required by each node of the neural network, and the memory allocation apparatus includes: a sorting module, configured to sort the nodes of the neural network in descending order of required memory block size; the allocating module 504, specifically configured to allocate memory blocks to the nodes of the neural network in their sorted order and generate at least one memory block multiplexing policy; and the determining module 506, specifically configured to take a first target multiplexing policy as the memory allocation policy of the neural network, where the first target multiplexing policy is a memory block multiplexing policy whose total memory block size is less than or equal to a first preset threshold.
In this embodiment, all nodes of the neural network on the electronic device are sorted in descending order of the memory block each node requires. Memory blocks are then allocated to the nodes from front to back in that order, so that larger memory blocks are reused preferentially, reducing the total running memory the neural network must request. Allocating memory blocks to the nodes yields at least one memory allocation scheme, and the schemes whose total memory is less than or equal to the first preset threshold are output. Through this implementation of the neural network memory reuse policy, the reuse rate of the memory of the electronic device can be improved and the total running memory required for forward inference reduced, which in turn allows the electronic device to support more complex neural networks, a greater diversity of execution platforms, and a larger number of neural networks running in parallel.
Further, in an embodiment of the present application, the memory allocation apparatus 500 further includes: a dividing module, configured to divide the nodes of the neural network into N node groups according to their sorted order; and the allocating module 504, specifically configured to allocate memory blocks to the nodes of each node group and generate at least one memory block multiplexing policy, where N is a positive integer greater than or equal to 4.
Further, in an embodiment of the present application, the allocating module 504 is specifically configured to: determine at least one first multiplexed memory block for the i-th node in the 1st node group, allocate the first multiplexed memory block to the i-th node, and generate at least one first memory block multiplexing policy; determine a first non-multiplexed memory block for the i-th node, allocate the first non-multiplexed memory block to the i-th node, and generate at least one second memory block multiplexing policy; and take a second target multiplexing policy as the memory block multiplexing policy, where i is greater than or equal to 1 and less than or equal to O, O is the total number of nodes in the 1st node group, and the second target multiplexing policy is a first memory block multiplexing policy or a second memory block multiplexing policy whose total memory block size is less than or equal to a second preset threshold.
Further, in an embodiment of the present application, the allocating module 504 is specifically configured to: determine, based on the memory block multiplexing policy determined for the (M-1)-th node group, at least one second multiplexed memory block for the j-th node in the M-th node group, allocate the second multiplexed memory block to the j-th node, and generate at least one third memory block multiplexing policy; determine a second non-multiplexed memory block for the j-th node, allocate the second non-multiplexed memory block to the j-th node, and generate at least one fourth memory block multiplexing policy; and take a third target multiplexing policy as the memory block multiplexing policy, where M is greater than 1 and less than N, j is greater than or equal to 1 and less than or equal to P, P is the total number of nodes in the M-th node group, and the third target multiplexing policy is a third memory block multiplexing policy or a fourth memory block multiplexing policy whose total memory block size is less than or equal to a third preset threshold.
Further, in an embodiment of the present application, the allocating module 504 is specifically configured to: based on the memory block multiplexing policy determined for the (N-1)-th node group, allocate a memory block to the t-th node in the N-th node group according to the memory blocks of the 2 nodes that precede it in the topology, and generate at least one fifth memory block multiplexing policy; and take the fifth memory block multiplexing policy as the memory block multiplexing policy, where t is greater than or equal to 1 and less than or equal to Q, and Q is the total number of nodes in the N-th node group.
The memory allocation apparatus 500 in the embodiments of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an Ultra-Mobile Personal Computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present application are not specifically limited in this respect.
The memory allocation apparatus 500 in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited.
The memory allocation apparatus 500 provided in this embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 6, an embodiment of the present application further provides an electronic device 600, including a processor 602, a memory 604, and a program or instructions stored in the memory 604 and executable on the processor 602. When executed by the processor 602, the program or instructions implement each process of the memory allocation method embodiments described above and can achieve the same technical effects; details are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: a radio frequency unit 702, a network module 704, an audio output unit 706, an input unit 708, a sensor 710, a display unit 712, a user input unit 714, an interface unit 716, a memory 718, and a processor 720.
Those skilled in the art will appreciate that the electronic device 700 may further include a power supply (e.g., a battery) for powering the various components, and the power supply may be logically coupled to the processor 720 via a power management system so as to manage charging, discharging, power consumption, and other functions. The structure shown in fig. 7 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange the components differently. Details are omitted here.
The processor 720 is configured to: acquire global information of a neural network; obtain a memory block multiplexing policy of the neural network based on the global information; and determine a memory allocation policy of the neural network based on the memory block multiplexing policy.
This embodiment provides an implementation of a neural network memory reuse policy. Specifically, a memory block multiplexing policy is determined for the nodes of a neural network according to the global information of the network, so that when the neural network runs, memory is allocated to its nodes according to that policy. In this way, the reuse rate of the memory of the electronic device is improved and the total running memory required for forward inference of the neural network is reduced, which in turn allows the electronic device to support more complex neural networks, a greater diversity of execution platforms, and a larger number of neural networks running in parallel.
Further, in an embodiment of the present application, the global information includes a required memory block of a node of the neural network; processor 720 is specifically configured to: sequencing the nodes of the neural network according to the sequence of the required memory blocks from large to small; allocating memory blocks for the nodes of the neural network according to the sequencing sequence of the nodes, and generating at least one memory block multiplexing strategy; and taking the first target multiplexing strategy as a memory allocation strategy of the neural network, wherein the first target multiplexing strategy is a memory block multiplexing strategy of which the sum of the memory blocks is less than or equal to a first preset threshold value.
Further, in an embodiment of the present application, the processor 720 is specifically configured to: dividing nodes of the neural network into N node groups according to the sorting sequence of the nodes; allocating memory blocks to the nodes of each node group, and generating at least one memory block multiplexing strategy; wherein N is a positive integer greater than or equal to 4.
Further, in an embodiment of the present application, the processor 720 is specifically configured to: determining at least one first multiplexing memory block of an ith node in a 1 st node group, allocating the first multiplexing memory block for the ith node, and generating at least one first memory block multiplexing strategy; determining a first non-multiplexing memory block of an ith node, allocating the first non-multiplexing memory block for the ith node, and generating at least one second memory block multiplexing strategy; taking the second target multiplexing strategy as a memory block multiplexing strategy; wherein i is greater than or equal to 1 and less than or equal to O, O is the total number of nodes of the 1 st node group, and the second target multiplexing policy is a first memory block multiplexing policy or a second memory block multiplexing policy in which the sum of the memory blocks is less than or equal to a second preset threshold.
Further, in an embodiment of the present application, the processor 720 is specifically configured to: determining at least one second multiplexing memory block of a jth node in the mth node group based on the memory block multiplexing strategy determined by the M-1 th node group, allocating the second multiplexing memory block to the jth node, and generating at least one third memory block multiplexing strategy; determining a second non-multiplexing memory block of a jth node, allocating the second non-multiplexing memory block to the ith node, and generating at least one fourth memory block multiplexing strategy; taking the third target multiplexing strategy as a memory block multiplexing strategy; wherein M is greater than 1 and less than N, j is greater than or equal to 1 and less than or equal to P, P is the total number of nodes of the mth node group, and the third target multiplexing policy is a third memory chunk multiplexing policy or a fourth memory chunk multiplexing policy in which the sum of the memory chunks is less than or equal to a third preset threshold.
Further, in one embodiment of the present application, the global information further includes topology information of nodes of the neural network; processor 720 is specifically configured to: based on the memory block multiplexing strategy determined by the (N-1) th node group, allocating memory blocks for the tth node according to the memory blocks of the first 2 nodes of the tth node in the Nth node group on the topological structure, and generating at least one fifth memory block multiplexing strategy; taking the fifth memory block multiplexing strategy as a memory block multiplexing strategy; wherein t is greater than or equal to 1 and less than or equal to Q, Q being the total number of nodes of the nth node group.
It should be understood that, in the embodiments of the present application, the radio frequency unit 702 may be used to transmit and receive information, or to transmit and receive signals during a call; in particular, it receives downlink data from a base station and sends uplink data to the base station. The radio frequency unit 702 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like.
The network module 704 provides wireless broadband internet access to the user, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 706 may convert audio data received by the radio frequency unit 702 or the network module 704 or stored in the memory 718 into an audio signal and output as sound. Also, the audio output unit 706 may provide audio output related to a specific function performed by the electronic apparatus 700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 706 includes a speaker, a buzzer, a receiver, and the like.
The input unit 708 is used to receive audio or video signals. The input unit 708 may include a Graphics Processing Unit (GPU) 7082 and a microphone 7084. The graphics processor 7082 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in video capture or image capture mode. The processed image frames may be displayed on the display unit 712, stored in the memory 718 (or other storage medium), or transmitted via the radio frequency unit 702 or the network module 704. The microphone 7084 can receive sound and process it into audio data; in phone-call mode, the processed audio data can be converted into a format transmittable to a mobile communication base station via the radio frequency unit 702 and output.
The electronic device 700 also includes at least one sensor 710, such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, a light sensor, a motion sensor, and others.
The display unit 712 is used to display information input by the user or information provided to the user. The display unit 712 may include a display panel 7122, and the display panel 7122 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
The user input unit 714 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 714 includes a touch panel 7142 and other input devices 7144. The touch panel 7142, also referred to as a touch screen, can collect touch operations by a user on or near it. The touch panel 7142 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position the user touches, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, sends the coordinates to the processor 720, and receives and executes commands from the processor 720. The other input devices 7144 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick; details are not repeated here.
Further, the touch panel 7142 may be overlaid on the display panel 7122. When the touch panel 7142 detects a touch operation on or near it, the operation is transmitted to the processor 720 to determine the type of touch event, and the processor 720 then provides a corresponding visual output on the display panel 7122 according to that type. The touch panel 7142 and the display panel 7122 may be provided as two separate components or integrated into one component.
The interface unit 716 is an interface through which an external device is connected to the electronic apparatus 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 716 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 700 or may be used to transmit data between the electronic apparatus 700 and the external device.
The memory 718 may be used to store software programs and various data. The memory 718 may mainly include a program storage area and a data storage area: the program storage area may store the operating system, application programs required by at least one function (such as a sound playing function and an image playing function), and the like; the data storage area may store data created according to the use of the device (such as audio data and a phonebook). In addition, the memory 718 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 720 performs the various functions of the electronic device 700 and processes data by running or executing software programs and/or modules stored in the memory 718 and invoking data stored in the memory 718, thereby monitoring the electronic device 700 as a whole. The processor 720 may include one or more processing units; preferably, the processor 720 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the process of the memory allocation method embodiment is implemented, and the same technical effect can be achieved, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. Readable storage media, including computer-readable storage media, such as Read-Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, etc.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the memory allocation method embodiment, and the same technical effect can be achieved.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; it may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (14)

1. A memory allocation method, comprising:
acquiring global information of a neural network;
obtaining a memory block multiplexing policy of the neural network based on the global information;
and determining a memory allocation policy of the neural network based on the memory block multiplexing policy.
2. The memory allocation method according to claim 1, wherein the global information comprises the memory block required by each node of the neural network;
the obtaining the memory block multiplexing policy of the neural network based on the global information comprises:
sorting the nodes of the neural network in descending order of required memory block size;
allocating memory blocks to the nodes of the neural network in their sorted order, and generating at least one memory block multiplexing policy;
the determining the memory allocation policy of the neural network based on the memory block multiplexing policy comprises:
taking a first target multiplexing policy as the memory allocation policy of the neural network, wherein the first target multiplexing policy is a memory block multiplexing policy whose total memory block size is less than or equal to a first preset threshold.
3. The memory allocation method according to claim 2, wherein the allocating memory blocks to the nodes of the neural network in their sorted order and generating at least one memory block multiplexing policy comprises:
dividing the nodes of the neural network into N node groups according to their sorted order;
allocating memory blocks to the nodes of each node group, and generating at least one memory block multiplexing policy;
wherein N is a positive integer greater than or equal to 4.
4. The memory allocation method according to claim 3, wherein the allocating memory blocks to the nodes of each node group, and generating at least one memory block multiplexing strategy includes:
determining at least one first multiplexing memory block of an ith node in the 1st node group, allocating the first multiplexing memory block to the ith node, and generating at least one first memory block multiplexing strategy;
determining a first non-multiplexed memory block of the ith node, allocating the first non-multiplexed memory block to the ith node, and generating at least one second memory block multiplexing strategy;
taking a second target multiplexing strategy as the memory block multiplexing strategy;
wherein i is greater than or equal to 1 and less than or equal to O, O is the total number of nodes in the 1st node group, and the second target multiplexing strategy is the first memory block multiplexing strategy or the second memory block multiplexing strategy in which the total size of the memory blocks is less than or equal to a second preset threshold.
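One way to read claim 4 is as a branch-and-prune search over partial allocations: each node of the 1st group either reuses a fitting existing block (a "first multiplexing memory block") or receives a fresh block (a "first non-multiplexed memory block"), and candidates whose running total exceeds the second threshold are dropped. The sketch below follows that reading; the strategy representation, the keep-one fallback, and the omission of lifetime-conflict checks are all simplifying assumptions.

```python
def fork_and_prune(strategies, node, threshold):
    # Fork every partial strategy into (a) one variant per existing block large
    # enough for `node` to reuse, plus (b) one variant that allocates a fresh
    # block, then drop candidates whose block-size sum exceeds the threshold.
    forked = []
    for s in strategies:
        for i, size in enumerate(s["blocks"]):             # (a) reuse variants
            if size >= node.mem_required:
                forked.append({"blocks": list(s["blocks"]),
                               "assignment": {**s["assignment"], node.name: i}})
        forked.append({"blocks": s["blocks"] + [node.mem_required],  # (b) fresh block
                       "assignment": {**s["assignment"],
                                      node.name: len(s["blocks"])}})
    kept = [s for s in forked if sum(s["blocks"]) <= threshold]
    return kept or forked[:1]  # keep at least one candidate alive

def expand_first_group(group, second_threshold):
    # Claim 4: the 1st node group starts from an empty strategy and is
    # expanded node by node (i = 1..O, in sorted order).
    strategies = [{"blocks": [], "assignment": {}}]
    for node in group:
        strategies = fork_and_prune(strategies, node, second_threshold)
    return strategies
```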
5. The memory allocation method according to claim 3, wherein the allocating memory blocks to the nodes of each node group, and generating at least one memory block multiplexing strategy includes:
determining, based on the memory block multiplexing strategy determined for the (M-1)th node group, at least one second multiplexing memory block of a jth node in the Mth node group, allocating the second multiplexing memory block to the jth node, and generating at least one third memory block multiplexing strategy;
determining a second non-multiplexed memory block of the jth node, allocating the second non-multiplexed memory block to the jth node, and generating at least one fourth memory block multiplexing strategy;
taking a third target multiplexing strategy as the memory block multiplexing strategy;
wherein M is greater than 1 and less than N, j is greater than or equal to 1 and less than or equal to P, P is the total number of nodes in the Mth node group, and the third target multiplexing strategy is the third memory block multiplexing strategy or the fourth memory block multiplexing strategy in which the total size of the memory blocks is less than or equal to a third preset threshold.
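Claim 5 repeats the same fork-and-prune step for the middle groups, with one difference: group M is seeded with the strategies already determined for group M-1 rather than with an empty strategy. A sketch, reusing the `fork_and_prune` helper from the claim-4 example above:

```python
def expand_middle_group(prev_strategies, group, third_threshold):
    # Claim 5: group M (1 < M < N) extends the strategies carried over from
    # group M-1, forking and pruning each of its P nodes in turn.
    # `fork_and_prune` is the helper defined in the claim-4 sketch.
    strategies = prev_strategies
    for node in group:
        strategies = fork_and_prune(strategies, node, third_threshold)
    return strategies
```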
6. The memory allocation method according to any one of claims 3 to 5, wherein the global information further includes topology information of the nodes of the neural network;
the allocating memory blocks to the nodes of each node group, and generating at least one memory block multiplexing strategy includes:
allocating, based on the memory block multiplexing strategy determined for the (N-1)th node group, a memory block to the tth node in the Nth node group according to the memory blocks of the two nodes that precede the tth node in the topology, and generating at least one fifth memory block multiplexing strategy;
taking the fifth memory block multiplexing strategy as the memory block multiplexing strategy;
wherein t is greater than or equal to 1 and less than or equal to Q, and Q is the total number of nodes in the Nth node group.
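Claim 6 narrows the search for the last group: instead of trying every existing block, node t only considers the blocks of the two nodes preceding it in the topology. A sketch under the assumption that `predecessors` maps each node name to its predecessor names (a hypothetical structure the claim does not define):

```python
def expand_last_group(prev_strategies, group, predecessors):
    # Claim 6: in the Nth group, a node may only reuse a block already
    # assigned to one of its first two topological predecessors; if
    # neither block fits, a fresh block is allocated.
    results = []
    for s in prev_strategies:
        blocks = list(s["blocks"])
        assignment = dict(s["assignment"])
        for node in group:
            pred_ids = [assignment[p] for p in predecessors[node.name][:2]
                        if p in assignment]
            fits = [i for i in pred_ids if blocks[i] >= node.mem_required]
            if fits:
                assignment[node.name] = fits[0]       # reuse a predecessor's block
            else:
                blocks.append(node.mem_required)      # allocate a new block
                assignment[node.name] = len(blocks) - 1
        results.append({"blocks": blocks, "assignment": assignment})
    return results
```

Chaining these sketches — sort, split, expand the first group, then each middle group, then the last group, and finally pick the cheapest surviving strategy — gives one plausible end-to-end reading of claims 1 to 6; the claims themselves leave the thresholds and the group-splitting rule open.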
7. A memory allocation apparatus, comprising:
an acquisition module, configured to acquire global information of a neural network;
an allocation module, configured to obtain a memory block multiplexing strategy of the neural network based on the global information;
a determining module, configured to determine a memory allocation strategy of the neural network based on the memory block multiplexing strategy.
8. The memory allocation apparatus according to claim 7, wherein the global information includes the memory blocks required by the nodes of the neural network; the memory allocation apparatus further comprises:
a sorting module, configured to sort the nodes of the neural network in descending order of required memory block size;
the allocation module is specifically configured to allocate memory blocks to the nodes of the neural network in the sorted order of the nodes, and generate at least one memory block multiplexing strategy;
the determining module is specifically configured to take a first target multiplexing strategy as the memory allocation strategy of the neural network, wherein the first target multiplexing strategy is a memory block multiplexing strategy in which the total size of the memory blocks is less than or equal to a first preset threshold.
9. The memory allocation apparatus according to claim 8, further comprising:
a dividing module, configured to divide the nodes of the neural network into N node groups according to the sorted order of the nodes;
the allocation module is specifically configured to allocate memory blocks to the nodes of each node group, and generate at least one memory block multiplexing strategy;
wherein N is a positive integer greater than or equal to 4.
10. The memory allocation apparatus according to claim 9, wherein the allocation module is specifically configured to:
determine at least one first multiplexing memory block of an ith node in the 1st node group, allocate the first multiplexing memory block to the ith node, and generate at least one first memory block multiplexing strategy;
determine a first non-multiplexed memory block of the ith node, allocate the first non-multiplexed memory block to the ith node, and generate at least one second memory block multiplexing strategy;
take a second target multiplexing strategy as the memory block multiplexing strategy;
wherein i is greater than or equal to 1 and less than or equal to O, O is the total number of nodes in the 1st node group, and the second target multiplexing strategy is the first memory block multiplexing strategy or the second memory block multiplexing strategy in which the total size of the memory blocks is less than or equal to a second preset threshold.
11. The memory allocation apparatus according to claim 9, wherein the allocation module is specifically configured to:
determine, based on the memory block multiplexing strategy determined for the (M-1)th node group, at least one second multiplexing memory block of a jth node in the Mth node group, allocate the second multiplexing memory block to the jth node, and generate at least one third memory block multiplexing strategy;
determine a second non-multiplexed memory block of the jth node, allocate the second non-multiplexed memory block to the jth node, and generate at least one fourth memory block multiplexing strategy;
take a third target multiplexing strategy as the memory block multiplexing strategy;
wherein M is greater than 1 and less than N, j is greater than or equal to 1 and less than or equal to P, P is the total number of nodes in the Mth node group, and the third target multiplexing strategy is the third memory block multiplexing strategy or the fourth memory block multiplexing strategy in which the total size of the memory blocks is less than or equal to a third preset threshold.
12. The memory allocation apparatus according to any one of claims 9 to 11, wherein the global information further includes topology information of the nodes of the neural network; the allocation module is specifically configured to:
allocate, based on the memory block multiplexing strategy determined for the (N-1)th node group, a memory block to the tth node in the Nth node group according to the memory blocks of the two nodes that precede the tth node in the topology, and generate at least one fifth memory block multiplexing strategy;
take the fifth memory block multiplexing strategy as the memory block multiplexing strategy;
wherein t is greater than or equal to 1 and less than or equal to Q, and Q is the total number of nodes in the Nth node group.
13. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the memory allocation method according to any one of claims 1 to 6.
14. A readable storage medium, on which a program or instructions are stored, wherein the program or instructions, when executed by a processor, implement the steps of the memory allocation method according to any one of claims 1 to 6.
CN202111108224.0A 2021-09-22 2021-09-22 Memory allocation method, memory allocation device, electronic equipment and readable storage medium Pending CN113791910A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111108224.0A CN113791910A (en) 2021-09-22 2021-09-22 Memory allocation method, memory allocation device, electronic equipment and readable storage medium
PCT/CN2022/119685 WO2023045879A1 (en) 2021-09-22 2022-09-19 Memory allocation method, memory allocation apparatus, electronic device, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111108224.0A CN113791910A (en) 2021-09-22 2021-09-22 Memory allocation method, memory allocation device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN113791910A true CN113791910A (en) 2021-12-14

Family

ID=78879132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111108224.0A Pending CN113791910A (en) 2021-09-22 2021-09-22 Memory allocation method, memory allocation device, electronic equipment and readable storage medium

Country Status (2)

Country Link
CN (1) CN113791910A (en)
WO (1) WO2023045879A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110597616B * 2018-06-13 2022-07-29 Huawei Technologies Co., Ltd. Memory allocation method and device for neural network
CN113127181B * 2019-12-30 2024-02-20 Hangzhou Hikvision Digital Technology Co., Ltd. Memory management method, device and storage medium
CN111814971B * 2020-06-30 2022-08-05 Hangzhou Guoxin Technology Co., Ltd. Memory allocation method of neural network
CN112256441B * 2020-12-23 2021-05-04 Shanghai Qigan Electronic Information Technology Co., Ltd. Memory allocation method and device for neural network inference
CN113791910A * 2021-09-22 2021-12-14 Vivo Mobile Communication Co., Ltd. Memory allocation method, memory allocation device, electronic equipment and readable storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023045879A1 (en) * 2021-09-22 2023-03-30 维沃移动通信有限公司 Memory allocation method, memory allocation apparatus, electronic device, and readable storage medium
CN114298294A (en) * 2021-12-28 2022-04-08 杭州雄迈集成电路技术股份有限公司 Neural network memory optimization method and device based on hardware accelerator
CN114298294B (en) * 2021-12-28 2022-11-01 杭州雄迈集成电路技术股份有限公司 Neural network memory optimization method and device based on hardware accelerator

Also Published As

Publication number Publication date
WO2023045879A1 (en) 2023-03-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination