CN114489952A - Queue allocation method and device - Google Patents

Queue allocation method and device

Info

Publication number
CN114489952A
Authority
CN
China
Prior art keywords
queue
queues
addresses
linked list
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210107695.8A
Other languages
Chinese (zh)
Inventor
王建东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yunbao Intelligent Co ltd
Original Assignee
Shenzhen Yunbao Intelligent Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yunbao Intelligent Co ltd filed Critical Shenzhen Yunbao Intelligent Co ltd
Priority to CN202210107695.8A
Publication of CN114489952A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5077Logical partitioning of resources; Management or configuration of virtualized resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present application provides a queue allocation method and apparatus. The queue allocation method includes the following steps: constructing a data structure unit, where the data structure unit stores a plurality of different queue numbers S, each queue number S being a number of queues that a virtual device may apply for, and, for each queue number S, the data structure unit stores the addresses of at least one group of S consecutive queues; receiving a queue application request from a virtual device and, if the number of queues requested matches any queue number S stored in the data structure unit, allocating to the request the addresses of a free group of S consecutive queues stored in the data structure unit for the matched queue number S and marking those addresses as occupied; and receiving a queue release request from the virtual device and marking the addresses of the group of S consecutive queues to be released as free. The method and apparatus can reduce the formation of 'holes' in the queue space and improve the utilization rate of the queue space.

Description

Queue allocation method and device
Technical Field
The present application relates to the field of network data processing, and in particular, to a queue allocation method and apparatus.
Background
In a cloud computing virtualization scenario, Virtio is a general paravirtualized I/O framework through which a hypervisor presents a series of virtualized devices. In cloud computing, to allow the guest OSs of multiple virtual machines to run independently on the host OS of the same physical machine, a virtualization layer, called the hypervisor, is usually added on top of the host OS of the physical machine to host the guest OSs.
The Virtio framework consists essentially of three parts: the front-end driver (front-end), the back-end device (back-end), and the virtqueue. Virtio implements its I/O mechanism using virtqueues, each of which is a queue.
With the evolution of cloud computing virtualization technology, smart network cards that offload the Virtio back-end to hardware have appeared. These smart network cards are usually implemented with an FPGA (Field Programmable Gate Array) chip or an ASIC (Application Specific Integrated Circuit) chip; multiple network card devices are virtualized on the physical machine side through SR-IOV (Single Root I/O Virtualization), and a certain number of queues inside the chip back the network card devices exposed to the physical machine side.
Suppose a smart network card can virtualize M network card devices on the physical machine side and implements N queues inside the chip, where M and N are positive integers, both greater than or equal to 2, and N is greater than or equal to M. For the problem of how to allocate the N queues to the M network card devices, current practice is generally to allocate queues on demand: however many network card devices the user virtualizes on the physical machine side, a corresponding amount of queue resources is allocated to those virtual network card devices. For ease of hardware implementation, however, queues must be allocated contiguously, i.e., the queues allocated to one device are contiguous inside the chip. As a result, after virtual network card devices have been created and deleted many times, several released small queue spaces often cannot satisfy a request for a large queue space because they are not contiguous; these discontiguous released small queue spaces form 'holes' in the queue space, so the utilization rate of the queue space is low.
Disclosure of Invention
The present application provides a queue allocation method and apparatus, which can reduce the formation of 'holes' in the queue space and thereby improve the utilization rate of the queue space.
In a first aspect, the present application provides a queue allocation method, including:
constructing a data structure unit, wherein the data structure unit is configured to store a plurality of different queue numbers S, each queue number S is a number of queues that a virtual device may apply for, and, for each queue number S, the data structure unit is further configured to store the addresses of at least one group of S consecutive queues, where S is a positive integer;
receiving a queue application request from the virtual device, and if the number of queues requested by the queue application request matches any queue number S stored in the data structure unit, allocating to the queue application request the addresses of a free group of S consecutive queues stored in the data structure unit for the matched queue number S, and marking the addresses of the group of S consecutive queues allocated to the queue application request as occupied;
receiving a queue release request from the virtual device, and marking the addresses of the group of S consecutive queues to be released by the queue release request as free.
In a second aspect, the present application provides a queue allocation apparatus, including:
a data structure construction module, configured to construct a data structure unit, wherein the data structure unit is configured to store a plurality of different queue numbers S, each queue number S is a number of queues that a virtual device may apply for, and, for each queue number S, the data structure unit is further configured to store the addresses of at least one group of S consecutive queues, where S is a positive integer;
a queue application request module, configured to receive a queue application request from the virtual device and, if the number of queues requested by the queue application request matches any queue number S stored in the data structure unit, allocate to the queue application request the addresses of a free group of S consecutive queues stored in the data structure unit for the matched queue number S, and mark the addresses of the group of S consecutive queues allocated to the queue application request as occupied; and
a queue release request module, configured to receive a queue release request from the virtual device and mark the addresses of the group of S consecutive queues to be released by the queue release request as free.
Further, the data structure unit includes an array of length L, the array being configured to store a plurality of different queue numbers S, and the ith array element storing the number of queues S = S[i] that a virtual device may apply for, where L, i and S[i] are positive integers, i is greater than or equal to 1, and i is less than or equal to L.
Further, the array is configured to store a plurality of queue numbers S set according to a first rule, where the first rule is that the queue number S[i] stored in the ith array element is the ith power of the minimum number of queues a virtual device may apply for, and the queue number S[L] stored in the Lth array element is the maximum number of queues a virtual device may apply for.
Furthermore, the data structure unit also includes L doubly linked lists; the ith doubly linked list includes T[i] nodes, takes the ith array element of the array as its head node, and each node other than the head node stores the addresses of S[i] consecutive queues, where T[i] is a positive integer.
Furthermore, the ith doubly linked list includes T[i] nodes set according to a second rule, where the second rule is that the number of queues assigned to the ith doubly linked list is the product of S[i] and T[i]-1, the total number of queues assigned to the L doubly linked lists is equal to the number of queues configured in the chip, and the product of T[i]-1 and S[i] is equal to S[L]; if S[i] is greater than S[i-1], then T[i] is less than T[i-1], and if S[i] is less than S[i-1], then T[i] is greater than T[i-1].
Further, when the number of queues requested by the queue application request matches a queue number S stored in the data structure unit, the queue application request module is further configured to:
if the addresses of the group of S[i] consecutive queues stored in each node of the ith doubly linked list are all marked as occupied, and the kth node of the jth doubly linked list stores the addresses of a free group of S[j] consecutive queues, where S[i] is less than S[j], j and k are positive integers, j is less than or equal to L, k is greater than 1, and k is less than or equal to T[i],
connect the kth node of the jth doubly linked list to the tail of the ith doubly linked list, split the addresses of the group of S[j] consecutive queues stored in the kth node into multiple groups of addresses of S[i] consecutive queues, allocate one of those groups of addresses to the queue application request, and mark it as occupied.
Further, upon receiving a queue release request from the virtual device and marking the addresses of the group of S consecutive queues to be released by the queue release request as free, the queue release request module is further configured to:
if the addresses of the group of S[i] consecutive queues to be released by the queue release request are one of the multiple groups of addresses of S[i] consecutive queues obtained by splitting the addresses of the group of S[j] consecutive queues of the kth node of the jth doubly linked list, then, when the other groups among those addresses are all free, restore the kth node of the jth doubly linked list, currently connected to the tail of the ith doubly linked list, to its position as the kth node of the jth doubly linked list.
Further, when the number of queues requested by the queue application request matches a queue number S stored in the data structure unit, the queue application request module is further configured to:
if the addresses of the group of S[i] consecutive queues stored in each node of the ith doubly linked list are all marked as occupied, and the kth to (k + S[i]/S[j] - 1)th nodes of the jth doubly linked list each store the addresses of a free group of S[j] consecutive queues, where S[i] is greater than S[j], j and k are positive integers, j is less than or equal to L, k is greater than 1, and k is less than or equal to T[i],
connect the kth to (k + S[i]/S[j] - 1)th nodes of the jth doubly linked list to the tail of the ith doubly linked list, merge the groups of addresses of S[j] consecutive queues stored in those nodes into the addresses of one group of S[i] consecutive queues, allocate the merged group of addresses of S[i] consecutive queues to the queue application request, and mark it as occupied.
Further, upon receiving a queue release request from the virtual device and marking the addresses of the group of S consecutive queues to be released by the queue release request as free, the queue release request module is further configured to:
if the addresses of the group of S[i] consecutive queues to be released by the queue release request were obtained by merging the groups of addresses of S[j] consecutive queues stored in the kth to (k + S[i]/S[j] - 1)th nodes of the jth doubly linked list, restore the kth to (k + S[i]/S[j] - 1)th nodes of the jth doubly linked list, currently connected to the tail of the ith doubly linked list, to their positions as the kth to (k + S[i]/S[j] - 1)th nodes of the jth doubly linked list.
In a third aspect, the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and running on the processor, where the processor implements the steps of the queue allocation method provided in the first aspect when executing the program.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the queue allocation method provided by the first aspect.
In a fifth aspect, the present application provides a computer program product comprising a computer program or instructions which, when executed by a processor, implement the steps of the queue allocation method provided by the first aspect.
Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a queue allocation method according to a first embodiment of the present application;
fig. 2 is a schematic diagram of a data structure unit constructed in a queue allocation method according to a first embodiment of the present application;
fig. 3 is a schematic diagram illustrating a queue allocating method according to a first embodiment of the present application, where a large queue space is split into small queue spaces;
fig. 4 is a schematic diagram illustrating a method for allocating queues according to a first embodiment of the present application, in which small queue spaces are combined into a large queue space;
fig. 5 is a block diagram of a queue allocation apparatus according to a second embodiment of the present application;
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
With the current queue allocation approach, after virtual network card devices have been created and deleted many times, several released small queue spaces often cannot satisfy a request for a large queue space because they are not contiguous; these discontiguous released small queue spaces then form 'holes' in the queue space, so the utilization rate of the queue space is low.
It can be understood that a 'small queue space' here is small relative to a 'large queue space', i.e., it contains fewer queues than the large queue space.
The present application provides a queue allocation method that can solve the problems of the current queue allocation approach. The queue allocation method includes: constructing a data structure unit, where the data structure unit is configured to store a plurality of different queue numbers S, each queue number S being a number of queues that a virtual device may apply for, and, for each queue number S, the data structure unit is further configured to store the addresses of at least one group of S consecutive queues, where S is a positive integer; receiving a queue application request from the virtual device and, if the number of queues requested matches any queue number S stored in the data structure unit, allocating to the request the addresses of a free group of S consecutive queues stored in the data structure unit for the matched queue number S and marking those addresses as occupied; and receiving a queue release request from the virtual device and marking the addresses of the group of S consecutive queues to be released by the request as free.
In this queue allocation method, the queues available for allocation are divided into multiple queue spaces whose sizes correspond to the numbers of queues that virtual devices may apply for, and queue spaces of the same size form one row, so that multiple rows of queue spaces of different sizes are formed. When allocating queue space, a request is preferentially served from the row of the matching size; when the small queue spaces are exhausted, a large queue space can be split, and when the large queue spaces are exhausted, small queue spaces can be merged. This reduces the formation of holes in the queue space and improves the utilization rate of the queue space.
To elaborate on the present application, in a first embodiment, the present application provides a queue allocation method. Referring to fig. 1, fig. 1 is a schematic flowchart of the queue allocation method according to the first embodiment of the present application. The queue allocation method may include the following steps:
Step S10: construct a data structure unit, where the data structure unit is configured to store a plurality of different queue numbers S, each queue number S being a number of queues that a virtual device may apply for, and, for each queue number S, the data structure unit is further configured to store the addresses of at least one group of S consecutive queues.
In this queue allocation method, when the chip is powered on and running, the on-chip system constructs the data structure unit. The data structure unit is configured to store a plurality of different queue numbers S, where a queue number S is a number of queues that a virtual device may apply for, and S is a positive integer. For each queue number S, the data structure unit is also configured to store the addresses of at least one group of S consecutive queues.
It can be understood that the chip described here may be a data processing chip such as a DPU, and the on-chip system may be a system-on-chip (SoC) system or the like.
In a variation of the first embodiment, the data structure unit may include an array of length L, where L is a positive integer. The array is configured to store a plurality of different queue numbers S, and the ith array element of the array stores a number of queues S[i] that a virtual device may apply for, where i and S[i] are positive integers, i is greater than or equal to 1, and i is less than or equal to L. Storing the possible numbers of queues that virtual devices may apply for in an array allows these numbers to be classified and managed; the queues available for allocation can then be divided, according to these numbers, into multiple queue spaces whose sizes correspond to the numbers of queues that may be applied for, with queue spaces of the same size forming one row, thereby forming multiple rows of queue spaces of different sizes.
It can be understood that the virtual device may be a virtual network card device virtualized by the physical machine, or may also be other virtual devices such as a virtual video card device virtualized by the physical machine, a virtual sound card device, and the like. Wherein the physical machine may be a server providing cloud computing services.
In the first embodiment or a modified embodiment of the first embodiment of the present application, the array is configured to store a plurality of queue numbers S set according to a first rule, where the first rule is preset by the queue allocation method of the present application before the data structure unit is constructed. Specifically, the first rule may be: the queue number S[i] stored in the ith array element is the ith power of the minimum number of queues a virtual device may apply for, and the queue number S[L] stored in the Lth array element is the maximum number of queues a virtual device may apply for. Typically, the number of queues requested by a virtual device is a power of 2.
It should be noted that the first rule of the present application may also be other similar rules for setting the number S of the plurality of queues, and the present application is not limited to the first rule provided in the first embodiment or the modified embodiment of the first embodiment, and other rules similar to the first rule provided in the first embodiment or the modified embodiment of the first embodiment are within the scope of the present application.
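For illustration only, the following Python sketch shows one way such an array could be populated under the first rule. It assumes a minimum request size of 2 and a maximum of 8 (values chosen for the example, not fixed by the patent), so the stored sizes are the powers of 2 between them; it is a sketch, not the patent's implementation.

    # Illustrative sketch only: building the size array per the first rule,
    # assuming the minimum request size is 2 and the maximum is 8.
    MIN_QUEUES = 2   # smallest number of queues a virtual device may request (assumed)
    MAX_QUEUES = 8   # largest number of queues a virtual device may request (assumed)

    sizes = []       # sizes[i-1] holds S[i]
    s = MIN_QUEUES
    while s <= MAX_QUEUES:
        sizes.append(s)   # S[i] = MIN_QUEUES ** i when MIN_QUEUES == 2
        s *= 2

    print(sizes)  # [2, 4, 8] -> array length L = 3, S[L] = 8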
In a variant of the first embodiment, the data structure unit may further include L doubly linked lists. The ith of the L doubly linked lists includes T[i] nodes, where T[i] is a positive integer denoting the number of nodes in the ith doubly linked list. The ith doubly linked list takes the ith array element of the array as its head node, and each node other than the head node stores the addresses of S[i] consecutive queues.
It can be understood that the address of a queue refers to the physical address of the queue in the memory space of the physical machine, and 'the addresses of S[i] consecutive queues' means that the addresses of those S[i] queues are contiguous.
In the first embodiment or a modified embodiment of the first embodiment of the present application, the ith doubly linked list includes T[i] nodes set according to a second rule, where the second rule is preset by the queue allocation method of the present application before the data structure unit is constructed. Specifically, the second rule may be: the number of queues assigned to the ith doubly linked list is the product of S[i] and T[i]-1, the total number of queues assigned to the L doubly linked lists is equal to the number of queues in the chip, and the product of T[i]-1 and S[i] is equal to S[L]; if S[i] is greater than S[i-1], then T[i] is less than T[i-1], and if S[i] is less than S[i-1], then T[i] is greater than T[i-1].
It should be noted that the second rule in this application may also be other similar rules for setting T [ i ] nodes of the ith doubly linked list, and the application is not limited to the second rule provided in the first embodiment or the modified embodiment of the first embodiment, and other rules similar to the second rule provided in the first embodiment or the modified embodiment of the first embodiment are within the protection scope of the application.
For ease of understanding of the queue allocation method provided in the first embodiment or a modified embodiment of the first embodiment of the present application, the construction process of the data structure unit is illustrated below with reference to fig. 2.
It is assumed that 24 queues are configured inside the chip, that is, the number of queues available for allocation inside the chip is 24, the minimum number of queues for virtual device application is 2, and the maximum number of queues for virtual device application is 8.
Then, under the queue allocation method of the present application, the first rule may be set as follows:
the number of queues S[1] stored in the 1st array element 201 is the 1st power of the minimum number of queues (2) that a virtual device may apply for, i.e., S[1] = 2;
the number of queues S[2] stored in the 2nd array element 202 is the 2nd power of the minimum number of queues (2), i.e., S[2] = 4;
the number of queues S[3] stored in the 3rd array element 203 is the 3rd power of the minimum number of queues (2), i.e., S[3] = 8.
That is, the length of the array is L = 3. Accordingly, the second rule may be set as follows:
with the 1 st array element 201 as the head node, the 1 st doubly linked list 211 can be configured with 4 nodes besides the head node: the device comprises a Q0-1 node, a Q2-3 node, a Q4-5 node and a Q6-7 node, wherein the Q0-1 node represents two queues comprising Q0 and Q1, the Q2-3 node represents two queues comprising Q2 and Q3, the Q4-5 node represents two queues comprising Q4 and Q5, and the Q6-7 node represents two queues comprising Q6 and Q7. The number of queues configured for the 1 st doubly linked list 211 is the product of S [1] and T [1] -1, i.e., the number of queues configured for the 1 st doubly linked list 211 is the product of 2 and 4.
With the 2 nd array element 202 as the head node, the 2 nd doubly linked list 212 may also configure 2 nodes in addition to the head node: q8-11 and Q12-15, wherein the nodes Q8-11 represent four queues including Q8, Q9, Q10 and Q1, and the nodes Q12-15 represent four queues including Q12, Q13, Q14 and Q15. The number of queues to which the 2 nd doubly linked list 212 is configured is the product of S [2] and T [2] -1, i.e., the number of queues to which the 2 nd doubly linked list 212 is configured is the product of 4 and 2.
The 3 rd array element 203 is used as a head node, and the 3 rd doubly linked list 213 can be configured with 1 node Q16-23 besides the head node, wherein the Q16-23 nodes represent eight queues including Q16, Q17, Q18, Q19, Q20, Q21, Q22 and Q23. The 3 rd doubly linked list 213 is configured with the number of queues that is the product of S [3] and T [3] -1, i.e., the 3 rd doubly linked list 213 is configured with the number of queues that is the product of 8 and 1.
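This construction can be mirrored in a few lines of code. The following Python sketch is only an illustration of the example in fig. 2, not the patent's on-chip implementation; it assumes 24 queues and request sizes S = [2, 4, 8], and it uses plain Python lists and dictionaries in place of the array head nodes and the doubly linked lists.

    # Illustrative sketch only: building the structure of fig. 2 under the stated
    # assumptions -- 24 queues in the chip, minimum request 2, maximum request 8.
    TOTAL_QUEUES = 24
    SIZES = [2, 4, 8]                    # S[1], S[2], S[3] per the first rule
    S_L = SIZES[-1]                      # S[L] = 8

    structure = {}                       # S[i] -> list of nodes hanging off head node i
    base = 0
    for s in SIZES:
        nodes = []
        for _ in range(S_L // s):        # T[i] - 1 = S[L] / S[i] payload nodes (second rule)
            nodes.append({"first": base, "size": s, "free": True})
            base += s
        structure[s] = nodes

    assert base == TOTAL_QUEUES          # every one of the 24 queues appears exactly once
    for s, nodes in structure.items():
        print(s, [(n["first"], n["first"] + n["size"] - 1) for n in nodes])
    # 2 [(0, 1), (2, 3), (4, 5), (6, 7)]
    # 4 [(8, 11), (12, 15)]
    # 8 [(16, 23)]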
Referring to fig. 1 again, in the first embodiment, the queue allocation method of the present application further includes the following steps:
step S20, receiving a queue application request from the virtual device, if the number of queues to be applied by the queue application request matches any queue number S stored in the data structure unit, allocating the addresses of a free set of consecutive S queues stored in the data structure unit corresponding to the matched queue number S to the queue application request, and marking the addresses of the set of consecutive S queues allocated to the queue application request as occupied.
Specifically, when a virtual device is created, the system of the chip receives a queue application request of the virtual device. The system of the chip matches the queue number S stored in the data structure unit with the queue number to be applied by the virtual device one by one in a traversal mode so as to judge whether the queue number to be applied by the virtual device exists in the data structure unit. If the number of queues to be applied by the queue application request of the virtual device is matched with the number S of the queues stored in the data structure unit, namely the number of queues to be applied by the virtual device exists in the data structure unit, allocating the addresses of a group of free continuous S queues stored in the data structure unit corresponding to the matched number S of queues to the queue application request, and marking the addresses of the group of continuous S queues allocated to the queue application request as occupied. If the number of queues to be applied by the queue application request of the virtual device is not matched with any queue number S stored in the data structure unit, namely the number of queues to be applied by the virtual device does not exist in the data structure unit, the system feedback application of the chip fails.
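As an illustration of this matching step only (not the patent's implementation), the following Python sketch keeps one row of free groups per stored queue number S and serves a request from the matching row; the dictionary and function names are assumptions chosen for the example.

    # Illustrative sketch only: matching a request against the stored sizes and
    # handing out a free group, as in step S20. free_groups[s] is the row of free
    # groups of size s; each group is (first queue address, size).
    free_groups = {2: [(0, 2), (2, 2), (4, 2), (6, 2)],
                   4: [(8, 4), (12, 4)],
                   8: [(16, 8)]}
    occupied = {}                          # request id -> allocated group

    def apply_for_queues(req_id, count):
        if count not in free_groups:       # no matching size S stored: application fails
            return None
        row = free_groups[count]
        if not row:                        # matching row exists but is exhausted
            return None                    # (splitting/merging is handled separately)
        group = row.pop(0)                 # take a free group of S consecutive queues
        occupied[req_id] = group           # mark it occupied
        return group

    print(apply_for_queues("vdev0", 4))    # -> (8, 4): queues Q8-Q11
    print(apply_for_queues("vdev1", 3))    # -> None: 3 is not a stored queue number S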
In a modified embodiment of the first embodiment, when the number of queues requested by the queue application request matches a queue number S stored in the data structure unit, the queue allocation method of the present application further includes:
if the addresses of the group of S[i] consecutive queues stored in each node other than the head node of the ith doubly linked list are all marked as occupied, and the kth node of the jth doubly linked list stores the addresses of a free group of S[j] consecutive queues, connecting the kth node of the jth doubly linked list to the tail of the ith doubly linked list, splitting the addresses of the group of S[j] consecutive queues stored in the kth node into multiple groups of addresses of S[i] consecutive queues, allocating one of those groups of addresses to the queue application request, and marking the group of addresses allocated to the queue application request as occupied, where S[i] is less than S[j], j and k are positive integers, j is less than or equal to L, k is greater than 1, and k is less than or equal to T[i].
For ease of understanding, the process of splitting a large queue space into small queue spaces is illustrated with reference to figs. 2 and 3.
Specifically, if the group of S[2] = 4 consecutive queue addresses stored in each node other than the head node of the 2nd doubly linked list 212 is marked as occupied, and the 1st node (Q16-23) of the 3rd doubly linked list 213 stores a free group of S[3] = 8 consecutive queue addresses, then the 1st node of the 3rd doubly linked list 213 is connected to the tail of the 2nd doubly linked list 212, the group of S[3] = 8 consecutive queue addresses stored in that node is split into two groups of S[2] = 4 consecutive queue addresses, one of the two groups is allocated to the queue application request, and the group of addresses allocated to the queue application request is marked as occupied. When the small queue spaces are exhausted, splitting a large queue space in this way reduces the formation of holes in the queue space and improves the utilization rate of the queue space.
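The splitting path can be sketched as follows. This is an illustrative Python fragment under the same assumed structures, not the hardware logic; the "borrowed" record of where a split group came from is an assumption added so that the split can later be undone.

    # Illustrative sketch only: splitting a larger free group when the requested row
    # is empty, mirroring fig. 3 (Q16-23 split into two groups of 4).
    free_groups = {2: [], 4: [], 8: [(16, 8)]}   # rows of free (first, size) groups
    borrowed = {}                                 # split piece -> (origin row, origin group)

    def split_and_allocate(count):
        for bigger in sorted(free_groups):
            if bigger > count and free_groups[bigger]:
                first, size = free_groups[bigger].pop(0)     # detach the big group
                pieces = [(first + k, count) for k in range(0, size, count)]
                for p in pieces:
                    borrowed[p] = (bigger, (first, size))    # remember the origin
                allocated = pieces[0]                        # hand out one piece
                free_groups[count].extend(pieces[1:])        # the rest stay free in row 'count'
                return allocated
        return None

    print(split_and_allocate(4))   # -> (16, 4): Q16-19; (20, 4) is now free in the size-4 row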
In a modified embodiment of the first embodiment, when the number of queues requested by the queue application request matches a queue number S stored in the data structure unit, the queue allocation method of the present application further includes:
if the addresses of the group of S[i] consecutive queues stored in each node other than the head node of the ith doubly linked list are all marked as occupied, and the kth to (k + S[i]/S[j] - 1)th nodes of the jth doubly linked list each store the addresses of a free group of S[j] consecutive queues, where S[i] is greater than S[j], j and k are positive integers, j is less than or equal to L, k is greater than 1, and k is less than or equal to T[i],
then the kth to (k + S[i]/S[j] - 1)th nodes of the jth doubly linked list are connected to the tail of the ith doubly linked list, the groups of addresses of S[j] consecutive queues stored in those nodes are merged into the addresses of one group of S[i] consecutive queues, the merged group of addresses of S[i] consecutive queues is allocated to the queue application request, and the group of S[i] consecutive queue addresses allocated to the queue application request is marked as occupied.
For ease of understanding, the process of merging small queue spaces into a large queue space is illustrated with reference to figs. 2 and 4.
Specifically, if the addresses of the group of S[2] = 4 consecutive queues stored in each node other than the head node of the 2nd doubly linked list 212 are all marked as occupied, and the 3rd node and the 4th node of the 1st doubly linked list 211 each store a free group of S[1] = 2 consecutive queue addresses, then the 3rd node and the 4th node of the 1st doubly linked list 211 are connected as a whole to the tail of the 2nd doubly linked list 212, the two groups of S[1] = 2 consecutive queue addresses stored in the 3rd and 4th nodes are merged into one group of S[2] = 4 consecutive queue addresses, the merged group of S[2] = 4 consecutive queue addresses is allocated to the queue application request, and the group of S[2] = 4 consecutive queue addresses allocated to the queue application request is marked as occupied.
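A corresponding sketch of the merging path is given below, again with assumed Python stand-ins for the linked-list operations; the "merged_from" record of which small groups were combined is an assumption added so that the merge can later be undone.

    # Illustrative sketch only: merging adjacent smaller free groups when the
    # requested row is empty, mirroring fig. 4 (Q4-5 and Q6-7 merged into Q4-7).
    free_groups = {2: [(4, 2), (6, 2)], 4: [], 8: []}
    merged_from = {}                               # merged group -> list of original groups

    def merge_and_allocate(count):
        for smaller in sorted(free_groups, reverse=True):
            if smaller >= count:
                continue
            row = sorted(free_groups[smaller])
            need = count // smaller                # S[i] / S[j] small groups are required
            for start in range(len(row) - need + 1):
                run = row[start:start + need]
                # the groups must be consecutive in queue-address space
                if all(run[t][0] + smaller == run[t + 1][0] for t in range(need - 1)):
                    for g in run:
                        free_groups[smaller].remove(g)
                    merged = (run[0][0], count)
                    merged_from[merged] = run      # remember how to undo the merge
                    return merged                  # allocated to the request, marked occupied
        return None

    print(merge_and_allocate(4))   # -> (4, 4): queues Q4-Q7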
Referring to fig. 1 again, in the first embodiment, the queue allocation method of the present application further includes the following steps:
step S30, receiving a queue release request from the virtual device, and marking addresses of a group of S consecutive queues to be released by the queue release request as free.
Specifically, when the virtual device is deleted, the system of the chip receives a queue release request of the virtual device. The system of the chip marks the addresses of a set of consecutive S queues to be freed for a queue release request as free.
In a modified embodiment of the first embodiment, after the step of receiving a queue release request from the virtual device and marking the addresses of the group of S consecutive queues to be released by the queue release request as free, the queue allocation method further includes:
if the addresses of the group of S[i] consecutive queues to be released by the queue release request are one of the multiple groups of addresses of S[i] consecutive queues obtained by splitting the addresses of the group of S[j] consecutive queues of the kth node of the jth doubly linked list, then, when all of those groups of S[i] consecutive queue addresses are free, restoring the kth node of the jth doubly linked list, currently connected to the tail of the ith doubly linked list, to its position as the kth node of the jth doubly linked list.
For ease of understanding, the queue release process after a large queue space has been split into small queue spaces is illustrated with reference to figs. 2 and 3.
Specifically, if the group of S[2] = 4 consecutive queue addresses to be released by the queue release request is one of the two groups of S[2] = 4 consecutive queue addresses obtained by splitting the group of S[3] = 8 consecutive queue addresses of the 1st node of the 3rd doubly linked list 213, then, when the other group of S[2] = 4 consecutive queue addresses is also free, the 1st node of the 3rd doubly linked list 213, currently connected to the tail of the 2nd doubly linked list 212, is restored to its position as the 1st node of the 3rd doubly linked list 213.
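The release-and-restore behaviour after a split can be sketched as follows, continuing the assumed stand-in structures from the earlier fragments; it is an illustration, not the patent's implementation.

    # Illustrative sketch only: releasing a split-off group and, once all sibling
    # pieces are free again, restoring the original large group to its own row.
    free_groups = {2: [], 4: [(20, 4)], 8: []}        # Q20-23 is already free
    borrowed = {(16, 4): (8, (16, 8)), (20, 4): (8, (16, 8))}

    def release(group):
        size = group[1]
        free_groups[size].append(group)               # mark the released group as free
        if group in borrowed:
            big_row, big_group = borrowed[group]
            siblings = [g for g, o in borrowed.items() if o == (big_row, big_group)]
            if all(s in free_groups[size] for s in siblings):
                for s in siblings:                    # all pieces free: undo the split
                    free_groups[size].remove(s)
                    del borrowed[s]
                free_groups[big_row].append(big_group)

    release((16, 4))
    print(free_groups)   # {2: [], 4: [], 8: [(16, 8)]} -> Q16-23 restored as one group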
In a modified embodiment of the first embodiment, after the step of receiving a queue release request from the virtual device and marking the addresses of the group of S consecutive queues to be released by the queue release request as free, the queue allocation method further includes:
if the addresses of the group of S[i] consecutive queues to be released by the queue release request were obtained by merging the groups of addresses of S[j] consecutive queues stored in the kth to (k + S[i]/S[j] - 1)th nodes of the jth doubly linked list, the kth to (k + S[i]/S[j] - 1)th nodes of the jth doubly linked list, currently connected to the tail of the ith doubly linked list, are restored to their positions as the kth to (k + S[i]/S[j] - 1)th nodes of the jth doubly linked list.
For ease of understanding, the queue release process after small queue spaces have been merged into a large queue space is illustrated with reference to figs. 2 and 4.
Specifically, if the group of S[2] = 4 consecutive queue addresses to be released by the queue release request was obtained by merging the groups of S[1] = 2 consecutive queue addresses stored in the 3rd and 4th nodes of the 1st doubly linked list 211, the 3rd and 4th nodes of the 1st doubly linked list 211, currently connected to the tail of the 2nd doubly linked list 212, are restored to their positions as the 3rd and 4th nodes of the 1st doubly linked list 211.
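Likewise, a short sketch of undoing a merge on release, under the same assumed stand-in structures:

    # Illustrative sketch only: releasing a merged group and restoring the original
    # small groups to their own row, mirroring the release path after fig. 4.
    free_groups = {2: [(0, 2), (2, 2)], 4: [], 8: [(16, 8)]}
    merged_from = {(4, 4): [(4, 2), (6, 2)]}          # Q4-7 was built from Q4-5 and Q6-7

    def release(group):
        if group in merged_from:                      # undo the merge on release
            for g in merged_from.pop(group):          # put the original small groups back
                free_groups[g[1]].append(g)
        else:
            free_groups[group[1]].append(group)

    release((4, 4))
    print(free_groups)  # {2: [(0, 2), (2, 2), (4, 2), (6, 2)], 4: [], 8: [(16, 8)]}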
In the queue allocation method provided in the first embodiment or a modified embodiment of the first embodiment of the present application, the queues available for allocation are divided into multiple queue spaces whose sizes correspond to the numbers of queues that virtual devices may apply for, and queue spaces of the same size form one row, so that multiple rows of queue spaces of different sizes are formed. When allocating queue space, a request is preferentially served from the row of the matching size; when the small queue spaces are exhausted, a large queue space can be split, and when the large queue spaces are exhausted, small queue spaces can be merged. This reduces the formation of holes in the queue space and improves the utilization rate of the queue space.
A second embodiment of the present application provides a queue allocation apparatus. Referring to fig. 5, the queue allocation apparatus includes a data structure construction module 501.
The data structure construction module 501 is configured to construct a data structure unit. The data structure unit is configured to store a plurality of different queue numbers S, where S is a positive integer and each queue number S is a number of queues that a virtual device may apply for. For each queue number S, the data structure unit is further configured to store the addresses of at least one group of S consecutive queues.
In a variation of the second embodiment, the data structure unit includes an array of length L, the array being configured to store a plurality of different queue numbers S, and the ith array element storing the number of queues S = S[i] that a virtual device may apply for, where L, i and S[i] are positive integers, i is greater than or equal to 1, and i is less than or equal to L.
In a variation of the second embodiment, the array is configured to store a plurality of queue numbers S set according to a first rule, where the first rule is that the queue number S[i] stored in the ith array element is the ith power of the minimum number of queues a virtual device may apply for, and the queue number S[L] stored in the Lth array element is the maximum number of queues a virtual device may apply for.
In a modified embodiment of the second embodiment, the data structure unit further includes L doubly linked lists; the ith doubly linked list includes T[i] nodes, takes the ith array element of the array as its head node, and each node other than the head node stores the addresses of S[i] consecutive queues, where T[i] is a positive integer.
In a modified embodiment of the second embodiment, the ith doubly linked list includes T[i] nodes set according to a second rule, where the second rule is that the number of queues assigned to the ith doubly linked list is the product of S[i] and T[i]-1, the total number of queues assigned to the L doubly linked lists is equal to the number of queues configured inside the chip, and the product of T[i]-1 and S[i] is equal to S[L]; if S[i] is greater than S[i-1], then T[i] is less than T[i-1], and if S[i] is less than S[i-1], then T[i] is greater than T[i-1].
Referring to fig. 5, the queue allocation apparatus according to the second embodiment of the present application may further include a queue application request module 502.
The queue application request module 502 is configured to receive a queue application request from the virtual device and, if the number of queues requested by the queue application request matches a queue number S stored in the data structure unit, allocate to the queue application request the addresses of a free group of S consecutive queues stored in the data structure unit for the matched queue number S, and mark the addresses of the group of S consecutive queues allocated to the queue application request as occupied.
In a modified embodiment of the second embodiment, when the number of queues requested by the queue application request matches a queue number S stored in the data structure unit, the queue application request module 502 is further configured to:
if the addresses of the group of S[i] consecutive queues stored in each node of the ith doubly linked list are all marked as occupied, and the kth node of the jth doubly linked list stores the addresses of a free group of S[j] consecutive queues, where S[i] is less than S[j], j and k are positive integers, j is less than or equal to L, k is greater than 1, and k is less than or equal to T[i],
connect the kth node of the jth doubly linked list to the tail of the ith doubly linked list, split the addresses of the group of S[j] consecutive queues stored in the kth node into multiple groups of addresses of S[i] consecutive queues, allocate one of those groups of addresses to the queue application request, and mark it as occupied.
In a modified embodiment of the second embodiment, when the number of queues requested by the queue application request matches a queue number S stored in the data structure unit, the queue application request module 502 is further configured to:
if the addresses of the group of S[i] consecutive queues stored in each node of the ith doubly linked list are all marked as occupied, and the kth to (k + S[i]/S[j] - 1)th nodes of the jth doubly linked list each store the addresses of a free group of S[j] consecutive queues, where S[i] is greater than S[j], j and k are positive integers, j is less than or equal to L, k is greater than 1, and k is less than or equal to T[i],
connect the kth to (k + S[i]/S[j] - 1)th nodes of the jth doubly linked list to the tail of the ith doubly linked list, merge the groups of addresses of S[j] consecutive queues stored in those nodes into the addresses of one group of S[i] consecutive queues, allocate the merged group of addresses of S[i] consecutive queues to the queue application request, and mark it as occupied.
Referring to fig. 5, the queue allocation apparatus according to the second embodiment of the present application may further include a queue release request module 503.
The queue release request module 503 is configured to receive a queue release request from the virtual device and mark the addresses of the group of S consecutive queues to be released by the queue release request as free.
In a modified embodiment of the second embodiment, upon receiving a queue release request from the virtual device and marking the addresses of the group of S consecutive queues to be released by the queue release request as free, the queue release request module 503 is further configured to:
if the addresses of the group of S[i] consecutive queues to be released by the queue release request are one of the multiple groups of addresses of S[i] consecutive queues obtained by splitting the addresses of the group of S[j] consecutive queues of the kth node of the jth doubly linked list, then, when the other groups among those addresses are all free, restore the kth node of the jth doubly linked list, currently connected to the tail of the ith doubly linked list, to its position as the kth node of the jth doubly linked list.
In a modified embodiment of the second embodiment, upon receiving a queue release request from the virtual device and marking the addresses of the group of S consecutive queues to be released by the queue release request as free, the queue release request module 503 is further configured to:
if the addresses of the group of S[i] consecutive queues to be released by the queue release request were obtained by merging the groups of addresses of S[j] consecutive queues stored in the kth to (k + S[i]/S[j] - 1)th nodes of the jth doubly linked list, restore the kth to (k + S[i]/S[j] - 1)th nodes of the jth doubly linked list, currently connected to the tail of the ith doubly linked list, to their positions as the kth to (k + S[i]/S[j] - 1)th nodes of the jth doubly linked list.
In the second embodiment or a modified embodiment of the second embodiment of the present application, the queues available for allocation are divided into multiple queue spaces whose sizes correspond to the numbers of queues that virtual devices may apply for, and queue spaces of the same size form one row, so that multiple rows of queue spaces of different sizes are formed. When allocating queue space, a request is preferentially served from the row of the matching size; when the small queue spaces are exhausted, a large queue space can be split, and when the large queue spaces are exhausted, small queue spaces can be merged. This reduces the formation of holes in the queue space and improves the utilization rate of the queue space.
It should be noted that the queue allocation apparatus provided in the second embodiment or a modified embodiment of the second embodiment of the present application corresponds to the queue allocation method provided in the first embodiment or a modified embodiment of the first embodiment of the present application; for details of the second embodiment or its modified embodiments, reference may be made to the detailed description of the first embodiment or its modified embodiments.
The present application further provides an electronic device that may include a processor and a memory. The memory has stored thereon a computer program or instructions. The computer program or instructions, when executed by the processor, may implement the queue allocation method provided in the first embodiment of the present application or the modified embodiment of the first embodiment.
It will be appreciated that the electronic device may be a data processing chip, a network interface card, a server, a mobile terminal, etc.
When the electronic device is a data processing chip, the data processing chip includes a processor and a memory. The memory has stored thereon a computer program or instructions. The computer program or instructions, when executed by the processor, may implement the queue allocation method provided in the first embodiment of the present application or the modified embodiment of the first embodiment.
When the electronic device is a network interface card, the network interface card includes at least one data processing chip, which may be an FPGA chip with CPU capability or an ASIC chip with SOC capability. The FPGA chip or the ASIC chip comprises the processor and the memory. The memory has stored thereon a computer program or instructions. The computer program or instructions, when executed by the processor, may implement the queue allocation method provided in the first embodiment or the modified embodiment of the first embodiment of the present application.
When the electronic device is a server, the server may comprise a network interface card comprising at least one data processing chip, which may be an FPGA chip with CPU capabilities or an ASIC chip with SOC capabilities. The FPGA chip or the ASIC chip comprises the processor and the memory. The memory has stored thereon a computer program or instructions. The computer program or instructions, when executed by the processor, may implement the queue allocation method provided in the first embodiment of the present application or the modified embodiment of the first embodiment.
The server may include a single computer device, a server cluster composed of a plurality of servers, or a server structure of a distributed apparatus.
The present application also provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements part or all of the steps of the queue allocation method provided in the first embodiment or the modified embodiment of the first embodiment.
The present application also provides a computer program product. The computer program product comprises a computer program or instructions which, when executed by a processor, implement some or all of the steps of the queue allocation method.
As will be appreciated by those skilled in the art, the embodiments of the present application may be provided as a method, an apparatus, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices), computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principle and the implementation mode of the present application are explained by applying specific embodiments in the present application, and the description of the above embodiments is only used to help understanding the method and the core idea of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present application should not be construed as a limitation to the present application.

Claims (13)

1. A queue allocation method, the method comprising:
constructing a data structure unit, wherein the data structure unit is configured to store a plurality of different queue numbers S, each queue number S being a number of queues to be applied for by a virtual device, and, for each queue number S, the data structure unit is further configured to store the addresses of at least one group of S consecutive queues, where S is a positive integer;
receiving a queue application request of the virtual device, and, if the number of queues to be applied for by the queue application request matches any queue number S stored in the data structure unit, allocating the addresses of one free group of S consecutive queues stored in the data structure unit for the matched queue number S to the queue application request, and marking the addresses of the group of S consecutive queues allocated to the queue application request as occupied; and
receiving a queue release request of the virtual device, and marking the addresses of the group of S consecutive queues to be released by the queue release request as free.
2. The queue allocation method according to claim 1, wherein the data structure unit comprises an array of length L, the array being configured to store the plurality of different queue numbers S, the i-th array element storing the queue number S[i] to be applied for by the virtual device, where L, i, and S[i] are positive integers, i is greater than or equal to 1, and i is less than or equal to L.
3. The queue allocation method according to claim 2, wherein the array is configured to store a plurality of queue numbers S set according to a first rule, the first rule being that the queue number S[i] stored by the i-th array element is the i-th power of the minimum number of queues to be applied for by the virtual device, and the queue number S[L] stored by the L-th array element is the maximum number of queues to be applied for by the virtual device.
4. The queue allocation method according to claim 2 or 3, wherein the data structure unit further comprises L doubly linked lists, the i-th doubly linked list comprises T[i] nodes and takes the i-th array element of the array as its head node, and each node other than the head node stores the addresses of S[i] consecutive queues, where T[i] is a positive integer.
5. The queue allocation method according to claim 4, wherein the i-th doubly linked list comprises T[i] nodes set according to a second rule, the second rule being that, if the number of queues configured for the i-th doubly linked list is the product of S[i] and (T[i] - 1), the total number of queues configured for the L doubly linked lists is equal to the number of queues configured inside the chip, wherein the product of (T[i] - 1) and S[i] is equal to S[L]; and if S[i] is greater than S[i-1], then T[i] is less than T[i-1], while if S[i] is less than S[i-1], then T[i] is greater than T[i-1].
6. The queue allocation method according to claim 4 or 5, wherein, when the number of queues to be applied for by a queue application request matches a queue number S stored in the data structure unit, the method further comprises:
if the addresses of the group of S[i] consecutive queues stored by every node in the i-th doubly linked list are marked as occupied, and the k-th node of the j-th doubly linked list stores the addresses of a free group of S[j] consecutive queues, where S[i] is less than S[j], j and k are positive integers, j is less than or equal to L, k is greater than 1, and k is less than or equal to T[i],
then connecting the k-th node of the j-th doubly linked list to the tail of the i-th doubly linked list, splitting the addresses of the group of S[j] consecutive queues stored by the k-th node into multiple groups of addresses of S[i] consecutive queues, and allocating one of the multiple groups of addresses of S[i] consecutive queues to the queue application request and marking it as occupied.
7. The queue allocation method according to claim 6, wherein, after the step of, upon receiving the queue release request of the virtual device, marking the addresses of the group of S consecutive queues to be released by the queue release request as free, the method further comprises:
if the addresses of the group of S[i] consecutive queues to be released by the queue release request are one of the multiple groups of addresses of S[i] consecutive queues obtained by splitting the addresses of the group of S[j] consecutive queues of the k-th node of the j-th doubly linked list, then, when all of the multiple groups of addresses of S[i] consecutive queues are free, restoring the k-th node of the j-th doubly linked list, which was connected to the tail of the i-th doubly linked list, to its position as the k-th node of the j-th doubly linked list.
8. The queue allocation method according to claim 4 or 5, wherein, when the number of queues to be applied for by a queue application request matches a queue number S stored in the data structure unit, the method further comprises (the bracketed references below stand for node-index expressions rendered only as images FDA0003493936220000021, FDA0003493936220000031 and FDA0003493936220000032 in the original publication):
if the addresses of the group of S[i] consecutive queues stored by every node in the i-th doubly linked list are marked as occupied, and the k-th through the [FDA0003493936220000021]-th nodes of the j-th doubly linked list each store the addresses of a free group of S[j] consecutive queues, where S[i] is greater than S[j], j and k are positive integers, j is less than or equal to L, k is greater than 1, and k is less than or equal to T[i],
then connecting the k-th through the [FDA0003493936220000031]-th nodes of the j-th doubly linked list to the tail of the i-th doubly linked list, merging the addresses of the groups of S[j] consecutive queues stored by the k-th through the [FDA0003493936220000032]-th nodes into one group of addresses of S[i] consecutive queues, and allocating the merged group of addresses of S[i] consecutive queues to the queue application request and marking it as occupied.
9. The queue allocation method according to claim 8, wherein, after the step of, upon receiving the queue release request of the virtual device, marking the addresses of the group of S consecutive queues to be released by the queue release request as free, the method further comprises:
if the addresses of the group of S[i] consecutive queues to be released by the queue release request were obtained by merging the addresses of the groups of S[j] consecutive queues stored by the k-th through the [FDA0003493936220000033]-th nodes of the j-th doubly linked list, then restoring the k-th through the [FDA0003493936220000034]-th nodes of the j-th doubly linked list, which were connected to the tail of the i-th doubly linked list, to the positions of the k-th through the [FDA0003493936220000035]-th nodes of the j-th doubly linked list.
10. A queue allocation apparatus, the apparatus comprising:
a data structure building module, configured to construct a data structure unit, wherein the data structure unit is configured to store a plurality of different queue numbers S, each queue number S being a number of queues to be applied for by a virtual device, and, for each queue number S, the data structure unit is further configured to store the addresses of at least one group of S consecutive queues, where S is a positive integer;
a queue application request module, configured to receive a queue application request of the virtual device, and, if the number of queues to be applied for by the queue application request matches any queue number S stored in the data structure unit, to allocate the addresses of one free group of S consecutive queues stored in the data structure unit for the matched queue number S to the queue application request, and to mark the addresses of the group of S consecutive queues allocated to the queue application request as occupied; and
a queue release request module, configured to receive a queue release request of the virtual device and to mark the addresses of the group of S consecutive queues to be released by the queue release request as free.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the queue allocation method of any one of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the queue allocation method according to any one of claims 1 to 9.
13. A computer program product comprising a computer program or instructions which, when executed by a processor, implement the steps of the queue allocation method according to any one of claims 1 to 9.
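
Note for readability: the C sketch below is one possible realisation of the allocation scheme described in claims 1 through 6, written under stated assumptions rather than taken from the filing. It assumes the stored queue numbers S[i] grow as powers of two from the smallest to the largest request (one reading of the first rule in claim 3), that every per-size doubly linked list is provisioned with the same overall number of on-chip queue addresses (one reading of the second rule in claim 5), and that a larger free group is split when the matching list has no free node (claim 6). All identifiers (qgroup, qlist, pool, queue_alloc, queue_free) and the constants MIN_Q, MAX_Q and LEVELS are illustrative; the node restoration and merging of claims 7 through 9 is omitted.

/* Illustrative sketch only: names, constants and sizing rules are the
 * assumptions described in the note above, not taken from the filing. */
#include <stdlib.h>
#include <stdbool.h>

#define MIN_Q  2         /* smallest group a virtual device may request */
#define MAX_Q  16        /* largest group a virtual device may request  */
#define LEVELS 4         /* assumed sizes S[] = {2, 4, 8, 16}           */

struct qgroup {                 /* node: a run of S[i] consecutive queues */
    unsigned base;              /* address (index) of the first queue     */
    bool occupied;
    struct qgroup *prev, *next; /* doubly linked list links               */
};

struct qlist {                  /* "array element" acting as list head    */
    unsigned size;              /* S[i]: queues per group in this list    */
    struct qgroup head;         /* sentinel of the doubly linked list     */
};

static struct qlist pool[LEVELS];

static void list_append(struct qlist *l, struct qgroup *g)
{
    g->prev = l->head.prev;
    g->next = &l->head;
    l->head.prev->next = g;
    l->head.prev = g;
}

static struct qgroup *new_group(unsigned base)
{
    struct qgroup *g = calloc(1, sizeof(*g));
    g->base = base;
    return g;
}

/* Build the data structure unit: S[i] = MIN_Q << i, and give each list the
 * same overall budget of MAX_Q queue addresses. */
static void pool_init(void)
{
    unsigned next_base = 0;
    for (int i = 0; i < LEVELS; i++) {
        pool[i].size = MIN_Q << i;
        pool[i].head.prev = pool[i].head.next = &pool[i].head;
        for (unsigned n = 0; n < MAX_Q / pool[i].size; n++) {
            list_append(&pool[i], new_group(next_base));
            next_base += pool[i].size;
        }
    }
}

/* Claims 1 and 6: hand out a free run of `want` queues; when the matching
 * list has no free node, split a free group borrowed from a larger list. */
static int queue_alloc(unsigned want)
{
    int i = 0;
    while (i < LEVELS && pool[i].size != want) i++;
    if (i == LEVELS) return -1;            /* no matching queue number S */

    for (struct qgroup *g = pool[i].head.next; g != &pool[i].head; g = g->next)
        if (!g->occupied) { g->occupied = true; return (int)g->base; }

    for (int j = i + 1; j < LEVELS; j++)   /* borrow and split (claim 6) */
        for (struct qgroup *g = pool[j].head.next; g != &pool[j].head; g = g->next)
            if (!g->occupied) {
                g->occupied = true;        /* reserve the donor group; the
                                              claim instead re-links this
                                              node to list i's tail      */
                for (unsigned off = want; off < pool[j].size; off += want)
                    list_append(&pool[i], new_group(g->base + off));
                struct qgroup *first = new_group(g->base);
                first->occupied = true;
                list_append(&pool[i], first);
                return (int)first->base;
            }
    return -1;
}

/* Last step of claim 1: a release simply flips the run back to free (the
 * node restoration of claims 7 and 9 is left out of this sketch). */
static void queue_free(unsigned base, unsigned want)
{
    for (int i = 0; i < LEVELS; i++) {
        if (pool[i].size != want) continue;
        for (struct qgroup *g = pool[i].head.next; g != &pool[i].head; g = g->next)
            if (g->base == base) { g->occupied = false; return; }
    }
}

In such a sketch, pool_init() would run once at device initialisation, and each virtual-device request would be bracketed by queue_alloc(n) and queue_free(base, n); a production implementation would additionally need locking and the node re-linking and merging behaviour of claims 6 through 9.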
CN202210107695.8A 2022-01-28 2022-01-28 Queue distribution method and device Pending CN114489952A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210107695.8A CN114489952A (en) 2022-01-28 2022-01-28 Queue distribution method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210107695.8A CN114489952A (en) 2022-01-28 2022-01-28 Queue distribution method and device

Publications (1)

Publication Number Publication Date
CN114489952A true CN114489952A (en) 2022-05-13

Family

ID=81477192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210107695.8A Pending CN114489952A (en) 2022-01-28 2022-01-28 Queue distribution method and device

Country Status (1)

Country Link
CN (1) CN114489952A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115955447A (en) * 2023-03-13 2023-04-11 微网优联科技(成都)有限公司 Data transmission method, switch and switch system


Similar Documents

Publication Publication Date Title
EP3608792B1 (en) Managed switching between one or more hosts and solid state drives (ssds) based on the nvme protocol to provide host storage services
US9563458B2 (en) Offloading and parallelizing translation table operations
US9413683B2 (en) Managing resources in a distributed system using dynamic clusters
US9552233B1 (en) Virtual machine migration using free page hinting
JP5510556B2 (en) Method and system for managing virtual machine storage space and physical hosts
US9626221B2 (en) Dynamic guest virtual machine identifier allocation
KR102321913B1 (en) Non-volatile memory device, and memory system having the same
JP2014021972A (en) Methods and structure for improved flexibility in shared storage caching by multiple systems operating as multiple virtual machines
WO2017000645A1 (en) Method and apparatus for allocating host resource
US20170228190A1 (en) Method and system providing file system for an electronic device comprising a composite memory device
US9755986B1 (en) Techniques for tightly-integrating an enterprise storage array into a distributed virtualized computing environment
US20180239649A1 (en) Multi Root I/O Virtualization System
CN115988217A (en) Virtualized video coding and decoding system, electronic equipment and storage medium
US20190377612A1 (en) VCPU Thread Scheduling Method and Apparatus
CN115858103B (en) Method, device and medium for virtual machine hot migration of open stack architecture
CN115988218A (en) Virtualized video coding and decoding system, electronic equipment and storage medium
CN114281252A (en) Virtualization method and device for NVMe (network video recorder) device of nonvolatile high-speed transmission bus
CN114489952A (en) Queue distribution method and device
WO2020108536A1 (en) Virtual network resource allocation method and system and electronic device
CN116560803B (en) Resource management method and related device based on SR-IOV
US10397130B2 (en) Multi-cloud resource reservations
CN107967165B (en) Virtual machine offline migration method based on LVM
CN115150268A (en) Network configuration method and device of Kubernetes cluster and electronic equipment
CN109002347B (en) Virtual machine memory allocation method, device and system
CN114281516A (en) Resource allocation method and device based on NUMA attribute

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination