CN112433841B - Resource pool scheduling method, system, server and storage medium - Google Patents

Resource pool scheduling method, system, server and storage medium

Info

Publication number
CN112433841B
CN112433841B (application CN201910792343.9A)
Authority
CN
China
Prior art keywords
numa
node
numa node
resource pool
scheduling
Prior art date
Legal status
Active
Application number
CN201910792343.9A
Other languages
Chinese (zh)
Other versions
CN112433841A (en)
Inventor
陈琪
郭岳
钟储建
金天骄
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Zhejiang Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Zhejiang Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Group Zhejiang Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN201910792343.9A priority Critical patent/CN112433841B/en
Publication of CN112433841A publication Critical patent/CN112433841A/en
Application granted granted Critical
Publication of CN112433841B publication Critical patent/CN112433841B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06F 9/5016 — Allocation of resources to service a request, the resource being the memory
    • G06F 9/5027 — Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5077 — Logical partitioning of resources; management or configuration of virtualized resources
    • G06F 9/5088 — Techniques for rebalancing the load in a distributed system involving task migration
    • G06F 2209/5011 — Indexing scheme relating to resource allocation: pool
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a resource pool scheduling method, system, server, and storage medium that are based on the Numa model and schedule using Numa indexes. Virtual resources on the same Numa node can be reused. By combining Numa-model servers with Openstack resource pool scheduling technology, the underlying resources are consolidated, the data center's resources are used to the greatest possible extent, resource waste and redundancy are reduced, and the data center's resource utilization rate is improved; the method therefore has good promotion value for data center Openstack resource pool environments. In addition, a unified Numa view enables unified management of Numa nodes: on the basis of monitoring the Numa node load of each physical machine, a unified view of physical machine CPU frequency, Numa node allocation rate, and Numa node utilization rate is generated. Combined with the live migration capability of virtual machines, this enables fine-grained live migration of resources and has strong practical value in large-scale production environments.

Description

Resource pool scheduling method, system, server and storage medium
Technical Field
The invention relates to the technical field of computer virtualization resource scheduling, and in particular to a resource pool scheduling method, system, server, and storage medium.
Background
OpenStack is an open-source cloud computing management platform project whose work is carried out by several main components working together. The Openstack resource pool completes scheduling tasks through the Nova-Scheduler, mainly in two steps: filtering (Filter) and weight calculation (Weighting). Filtering removes hosts that do not meet the conditions; weight calculation sorts the remaining hosts by a computed value to find the most suitable host on which to start or migrate the virtual machine.
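The two-step "filter + weigh" flow can be sketched as follows. This is a minimal illustration of the scheduling pattern, not Nova's actual code; the host records, field names, and the RAM-based weigher are all assumptions for the example.

```python
# Hypothetical sketch of the two-step "filter + weigh" scheduling pattern.
# Host names and resource figures are illustrative, not Nova's API.

def filter_hosts(hosts, request):
    """Keep only hosts that can satisfy the request (the filtering step)."""
    return [h for h in hosts
            if h["free_vcpus"] >= request["vcpus"]
            and h["free_ram_mb"] >= request["ram_mb"]]

def weigh_hosts(hosts):
    """Rank the surviving hosts; here, prefer the most free RAM."""
    return sorted(hosts, key=lambda h: h["free_ram_mb"], reverse=True)

hosts = [
    {"name": "compute1", "free_vcpus": 8, "free_ram_mb": 4096},
    {"name": "compute2", "free_vcpus": 2, "free_ram_mb": 16384},
    {"name": "compute3", "free_vcpus": 16, "free_ram_mb": 8192},
]
request = {"vcpus": 4, "ram_mb": 2048}

# compute2 is filtered out (too few vCPUs); compute3 wins on free RAM.
candidates = weigh_hosts(filter_hosts(hosts, request))
print(candidates[0]["name"])  # compute3
```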
As processor counts grow, general-purpose PC servers now adopt NUMA (Non-Uniform Memory Access) technology to keep memory access from becoming a bottleneck. NUMA lets many processors behave as a single system while retaining the small-system advantage of being easy to program and manage. Because applications such as e-commerce place high demands on memory access, NUMA also poses challenges for complex architectural design. NUMA addresses the bottleneck by giving each processor its own local memory, avoiding the performance penalty that occurs when many processors contend for the same memory. For applications with scattered data (common in server-class workloads), NUMA can improve performance by roughly a factor of n over a single shared memory, where n is approximately the number of processors (or separate memories). The system's related hardware resources (such as CPU and memory) are divided into several nodes; in this model, a processor accesses its local node roughly 30%-40% faster than a remote node. For this Numa model, the Openstack resource pool currently offers two solutions: one uses the NumaTopologyFilter filtering method to deploy virtual machines on hosts with matching Numa nodes; the other forces the Dedicated/Prefer mode to pin the virtual machine to a single Numa node.
However, the existing Openstack resource pool scheduling scheme has the following drawbacks:
1) Numa model characteristics are insufficiently considered, so resource efficiency is low
The Numa model's key characteristic is that a CPU accesses its local node 30%-40% faster than a remote node. The filtering condition of the conventional resource pool scheduling method's NumaTopologyFilter only selects computing nodes whose Numa structure matches the virtual machine's; it does not consider the Numa model's characteristics and does not place the virtual machine within a single Numa node, so resource efficiency is low.
2) Weight calculation lacks Numa node indexes, so resource scheduling matches poorly
Weight calculation orders the filtered hosts by a computed value; the current indexes are based on a computing node's available memory, remaining disk space, IO load, and some basic metrics (such as CPU utilization).
The weight calculation does not incorporate the technical indexes of the Numa structure and Numa nodes. As a result, the degree of matching in resource scheduling is low, and the most appropriate computing node cannot be found.
3) Forcing the Dedicated/Prefer mode has usage limitations and low resource reuse
In Dedicated/Prefer mode, virtual machines are pinned to a single Numa node, but the mode currently has a usage limitation: each virtual CPU must map one-to-one onto a physical CPU, so the scenario of multiple virtual machines sharing one physical CPU cannot be realized in Dedicated/Prefer mode. This results in a low resource reuse rate.
Disclosure of Invention
The present invention has been made in view of the above problems, and aims to provide a resource pool scheduling system and a corresponding resource pool scheduling method that overcome or at least partially solve the above problems.
According to one aspect of the present invention, there is provided a resource pool scheduling method including the steps of:
acquiring the Numa information of the computing node;
after each scheduling of the Openstack resource pool is completed, unifying a scheduling result to a Numa node view;
initializing a Numa node which is not called and allocated according to the Numa node view, and calculating allocation rate when the Numa node is allocated and used by an Openstack resource pool;
periodically updating the memory and CPU utilization rate of each Numa node;
when the resource is scheduled, the Numa node information is counted, and a proper Numa node is filtered out;
calculating node weights of the filtered Numa nodes, screening the Numa node with the maximum weight, and dispatching the virtual machine to the Numa node; if the weight value is calculated equally, a Numa node is selected randomly for scheduling;
and after the dispatching is completed, updating the view information of the Numa nodes and the distribution rate information.
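The steps above can be sketched as a single scheduling loop. This is an illustrative reconstruction only: the node records, the filter predicate, the precomputed weights, and the bookkeeping fields are all assumptions, not the patent's implementation.

```python
import random

# Illustrative sketch of the scheduling loop described by the steps above.
# All field names and the weight values are simplified placeholders.

def schedule(vm, numa_view):
    """Pick a Numa node for `vm` from the unified Numa node view."""
    # Filter: keep nodes whose hardware-thread capacity can hold the VM.
    eligible = [n for n in numa_view
                if vm["cores"] < n["cores"] * n["threads_per_core"]]
    if not eligible:
        return None
    # Weigh: choose the highest-weight node, breaking ties randomly.
    best = max(n["weight"] for n in eligible)
    chosen = random.choice([n for n in eligible if n["weight"] == best])
    # Update the view: record the new allocation on the chosen node.
    chosen["allocated_cores"] += vm["cores"]
    return chosen

numa_view = [
    {"id": 0, "cores": 8, "threads_per_core": 2, "weight": 3.0, "allocated_cores": 0},
    {"id": 1, "cores": 16, "threads_per_core": 2, "weight": 6.0, "allocated_cores": 0},
]
node = schedule({"cores": 4}, numa_view)
print(node["id"], node["allocated_cores"])  # 1 4
```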
Optionally, the computing node Numa information includes the computing node number, Numa node number, CPU frequency, Core count, Core thread count, L1 Cache size, L2 Cache size, and L3 Cache size.
Optionally, the step of initializing a Numa node not called for allocation according to the Numa node view, and calculating the allocation rate after the Numa node is allocated and used by the Openstack resource pool further includes:
initializing a Numa node which is not called and allocated according to the Numa node view;
setting the distribution rate of the Numa nodes which are not called to be distributed as 0, and setting the use value as 0.001;
and calculating the allocation rate according to a preset formula after the Numa node is allocated and used by the Openstack resource pool.
Optionally, during resource scheduling, the Numa node information is aggregated, and the filtering rule is:
the number of cores requested by the virtual machine must be less than the Numa node's Core count multiplied by its Core thread count.
According to another aspect of the present invention, there is provided a resource pool scheduling system including:
the static processing module is used for acquiring the Numa information of the computing node;
the dynamic processing module is used for unifying the dispatching result to a Numa node view after each dispatching of the Openstack resource pool is completed;
the node allocation submodule is used for initializing the Numa nodes which are not called and allocated according to the Numa node view, and calculating allocation rate after the Numa nodes are allocated and used by the Openstack resource pool;
the utilization rate calculation submodule is used for periodically updating the memory and CPU utilization rate of each Numa node;
the Openstack resource pool filtering module is used for counting the Numa node information and filtering out proper Numa nodes when the resources are scheduled;
the weight calculation module is used for calculating the node weight of the filtered Numa nodes, screening the Numa node with the maximum weight, and dispatching the virtual machine to the Numa node;
if the weight value is calculated equally, a Numa node is selected randomly for scheduling;
and the updating module is used for updating the view information of the Numa nodes and the distribution rate information after the dispatching is completed.
Optionally, the computing node Numa information includes the computing node number, Numa node number, CPU frequency, Core count, Core thread count, L1 Cache size, L2 Cache size, and L3 Cache size.
Optionally, the node allocation submodule is further configured to:
initializing a Numa node which is not called and allocated according to the Numa node view;
setting the distribution rate of the Numa nodes which are not called to be distributed as 0, and setting the use value as 0.001;
and calculating the allocation rate according to a preset formula after the Numa node is allocated and used by the Openstack resource pool.
Optionally, the Openstack resource pool filtering module's filtering rule is:
the number of cores requested by the virtual machine must be less than the Numa node's Core count multiplied by its Core thread count.
According to still another aspect of the present invention, there is provided a server including: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus;
the memory is configured to store at least one executable instruction, where the executable instruction causes the processor to execute an operation corresponding to the resource pool scheduling method described above.
According to still another aspect of the present invention, a computer storage medium has at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the resource pool scheduling method described above.
The resource pool scheduling method and system of the invention are based on the Numa model and schedule using Numa indexes. Virtual resources on the same Numa node can be reused. By combining Numa-model servers with Openstack resource pool scheduling technology, the underlying resources are consolidated, the data center's resources are used to the greatest possible extent, resource waste and redundancy are reduced, and the data center's resource utilization rate is improved; the method therefore has good promotion value for data center Openstack resource pool environments. In addition, a unified Numa view enables unified management of Numa nodes: on the basis of monitoring the Numa node load of each physical machine, a unified view of physical machine CPU frequency, Numa node allocation rate, and Numa node utilization rate is generated. Combined with the live migration capability of virtual machines, this enables fine-grained live migration of resources and has strong practical value in large-scale production environments.
The foregoing is only an overview of the technical solution of the present invention. So that the technical means of the invention can be more clearly understood and implemented according to the contents of the specification, and so that the above and other objects, features, and advantages of the invention will be more readily apparent, specific embodiments of the invention are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a schematic flow diagram of a resource pool scheduling method according to an embodiment of the present invention;
FIG. 2 illustrates a block diagram of a resource pool scheduling system, according to one embodiment of the invention;
fig. 3 shows a block diagram of a server according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
OpenStack is an open-source cloud computing management platform project, and specific work is completed by combining several main components.
NUMA (Non-Uniform Memory Access) technology lets many processors behave as a single system while retaining the small-system advantage of being easy to program and manage. Because applications such as e-commerce place high demands on memory access, NUMA also poses challenges for complex architectural design. NUMA addresses this by giving each processor its own local memory, avoiding the performance penalty that occurs when many processors contend for the same memory. For applications with scattered data (common in server-class workloads), NUMA can improve performance by roughly a factor of n over a single shared memory, where n is approximately the number of processors (or separate memories).
Example 1
As shown in fig. 1, a resource pool scheduling method according to an exemplary embodiment of the present disclosure includes the following steps:
s11: acquiring the Numa information of the computing node;
in this step, the computing node Numa information includes, but is not limited to, the computing node number, Numa node number, CPU frequency, Core count, Core thread count, L1 Cache size, L2 Cache size, and L3 Cache size.
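The per-node record gathered in this step might be modeled as follows. This is a minimal sketch: the class and field names are illustrative, not the patent's actual schema; only the listed fields themselves come from the text.

```python
# Hypothetical record for the per-node Numa information listed above.
from dataclasses import dataclass

@dataclass
class NumaNodeInfo:
    compute_node_id: str
    numa_node_id: int
    cpu_freq_mhz: int
    cores: int
    threads_per_core: int
    l1_cache_kb: int
    l2_cache_kb: int
    l3_cache_kb: int

    @property
    def hardware_threads(self) -> int:
        # Total schedulable threads: N = Core count x Core thread count.
        return self.cores * self.threads_per_core

node = NumaNodeInfo("compute1", 0, 2400, 16, 2, 32, 1024, 36608)
print(node.hardware_threads)  # 32
```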
S12: after each scheduling of the Openstack resource pool is completed, unifying a scheduling result to a Numa node view;
specifically, after each scheduling of the Openstack resource pool is completed, the scheduling result is unified to a Numa node view, so that a basis is provided for future resource scheduling.
S13: initializing a Numa node which is not called and allocated according to the Numa node view, and calculating allocation rate when the Numa node is allocated and used by an Openstack resource pool;
specifically, the step of initializing a Numa node not called for allocation according to the Numa node view, and calculating the allocation rate after the Numa node is allocated and used by the Openstack resource pool further includes:
initializing a Numa node which is not called and allocated according to the Numa node view;
setting the distribution rate of the Numa nodes which are not called to be distributed as 0, and setting the use value as 0.001;
and calculating the allocation rate according to a preset formula after the Numa node is allocated and used by the Openstack resource pool.
The preset formula calculates the allocation rate, where i is the number of virtual machines using the Numa node, Ci is the Core count of the i-th virtual machine using the Numa node, and N = Numa node Core count × Core thread count.
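The formula image itself is not reproduced in the text, so the sketch below is an assumed reconstruction from the variable definitions: allocation rate = (sum of the Ci over the i virtual machines on the node) / N. The function name and the exact aggregation are hypotheses, not the patent's formula.

```python
# Assumed reconstruction of the allocation-rate bookkeeping: the sum of
# the cores of the VMs placed on the node, divided by the node's total
# hardware threads N = Core count x Core thread count.

UNALLOCATED_USE_VALUE = 0.001  # initial "use value" for untouched nodes

def allocation_rate(vm_core_counts, cores, threads_per_core):
    n = cores * threads_per_core
    return sum(vm_core_counts) / n

# An unallocated node starts at rate 0; after two 4-core VMs land on a
# 16-core node with 2 threads per core (N = 32):
print(allocation_rate([4, 4], cores=16, threads_per_core=2))  # 0.25
```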
S14: periodically updating the memory and CPU utilization rate of each Numa node;
in this step, the statistics of the memory and CPU utilization of each Numa node are updated periodically. Wherein CPU utilization = max { CPU utilization, memory utilization }.
S15: when the resource is scheduled, the Numa node information is counted, and a proper Numa node is filtered out;
optionally, during resource scheduling, the Numa node information is aggregated, and the filtering rule is:
the number of cores requested by the virtual machine must be less than the Numa node's Core count multiplied by its Core thread count.
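The filtering rule can be expressed as a single predicate over each candidate node. The node dictionaries and field names below are illustrative assumptions.

```python
# Filtering rule from the step above: a Numa node passes only if the VM's
# requested cores fit under the node's hardware-thread capacity
# (Core count x Core thread count).

def numa_node_fits(vm_cores, node_cores, node_threads_per_core):
    return vm_cores < node_cores * node_threads_per_core

nodes = [
    {"id": 0, "cores": 8, "threads_per_core": 1},   # capacity 8
    {"id": 1, "cores": 16, "threads_per_core": 2},  # capacity 32
]
vm = {"cores": 12}
eligible = [n for n in nodes
            if numa_node_fits(vm["cores"], n["cores"], n["threads_per_core"])]
print([n["id"] for n in eligible])  # [1]
```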
S16: calculating node weights of the filtered Numa nodes, screening the Numa node with the maximum weight, and dispatching the virtual machine to the Numa node;
specifically, the filtered calculation node calculates the weight of the Numa node through the weight, and the calculation rule is as follows:
wherein Li is LiCoche size/Min (LiCoche size)
If the weight value is calculated equally, a Numa node is selected randomly for scheduling;
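Only the cache-normalization term Li and the random tie-break survive in the text; the full weight formula image is not reproduced. The sketch below therefore assumes, hypothetically, that the weight sums the normalized L1/L2/L3 terms, and implements the stated random tie-break.

```python
import random

# Assumed weighing sketch: Li = Li cache size / min(Li cache size) per the
# text; the aggregation into a single weight (a plain sum here) is a
# hypothesis, not the patent's formula. Ties are broken randomly.

def cache_terms(nodes):
    """Normalize each cache level against the smallest value of that level."""
    terms = []
    for level in ("l1", "l2", "l3"):
        smallest = min(n[level] for n in nodes)
        terms.append({n["id"]: n[level] / smallest for n in nodes})
    return terms

def pick_node(nodes):
    terms = cache_terms(nodes)
    def weight(n):
        return sum(t[n["id"]] for t in terms)  # assumed aggregation
    best = max(weight(n) for n in nodes)
    tied = [n for n in nodes if weight(n) == best]
    return random.choice(tied)  # random tie-break, as the text states

nodes = [
    {"id": 0, "l1": 32, "l2": 256, "l3": 8192},
    {"id": 1, "l1": 64, "l2": 512, "l3": 16384},
]
print(pick_node(nodes)["id"])  # 1 -- strictly larger caches, so no tie
```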
s17: and after the dispatching is completed, updating the view information of the Numa nodes and the distribution rate information.
By adopting the method provided by this embodiment, scheduling is performed using Numa indexes based on the Numa model. Virtual resources on the same Numa node can be reused; by combining Numa-model servers with Openstack resource pool scheduling technology, the underlying resources are consolidated, data center resources are used to the greatest possible extent, resource waste and redundancy are reduced, and the data center's resource utilization rate is improved. The method therefore has good promotion value for data center Openstack resource pool environments. In addition, a unified Numa view enables unified management of Numa nodes: on the basis of monitoring the Numa node load of each physical machine, a unified view of physical machine CPU frequency, Numa node allocation rate, and Numa node utilization rate is generated. Combined with the live migration capability of virtual machines, this enables fine-grained live migration of resources and has strong practical value in large-scale production environments.
Example 2
Fig. 2 is a resource pool scheduling system according to an exemplary embodiment of the present invention, as shown in fig. 2, including:
the static processing module 21 is configured to obtain Numa information of the computing node;
optionally, the computing node Numa information includes the computing node number, Numa node number, CPU frequency, Core count, Core thread count, L1 Cache size, L2 Cache size, and L3 Cache size.
The dynamic processing module 22 is configured to unify a scheduling result to a Numa node view after each scheduling of the Openstack resource pool is completed;
the node allocation submodule 221 is configured to initialize a Numa node that is not called for allocation according to the Numa node view, and calculate an allocation rate when the Numa node is allocated and used by an Openstack resource pool;
the usage rate calculation submodule 222 is used for periodically updating the memory and the CPU usage rate of each Numa node;
specifically, the node allocation submodule and the utilization rate calculation submodule are submodules of the dynamic processing module, and after each scheduling of the Openstack resource pool is completed, a scheduling result is unified to a Numa node view, so that a basis is provided for future resource scheduling.
The node assignment submodule is further configured to:
initializing a Numa node which is not called and allocated according to the Numa node view;
setting the distribution rate of the Numa nodes which are not called to be distributed as 0, and setting the use value as 0.001;
and calculating the allocation rate according to a preset formula after the Numa node is allocated and used by the Openstack resource pool.
The preset formula is as follows:
where i is the number of virtual machines using the Numa node, Ci is the Core count of the i-th virtual machine using the Numa node, and N = Numa node Core count × Core thread count.
The Openstack resource pool filtering module 23 is configured to count the Numa node information during resource scheduling, and filter out a suitable Numa node;
optionally, the Openstack resource pool filtering module's filtering rule is:
the number of cores requested by the virtual machine must be less than the Numa node's Core count multiplied by its Core thread count.
The weight calculation module 24 is configured to calculate node weights of the filtered Numa nodes, screen a Numa node with a maximum weight, and schedule the virtual machine to the Numa node;
the filtered calculation node calculates the weight of the Numa node through the weight, and the calculation rule is as follows:
wherein Li is LiCoche size/Min (LiCoche size)
If the weight value is calculated equally, a Numa node is selected randomly for scheduling;
and an updating module 25, configured to update the Numa node view information and the allocation rate information after the scheduling is completed.
Example 3
A third embodiment of the present application provides a non-volatile computer storage medium, where at least one executable instruction is stored, where the computer executable instruction may perform the resource pool scheduling method in any of the foregoing method embodiments.
Example 4
Fig. 3 is a schematic structural diagram of a server according to the fourth embodiment of the present invention; the specific implementation of the server is not limited by the specific embodiments of the invention.
As shown in fig. 3, the server may include: a processor (processor), a communication interface (Communications Interface), a memory (memory), and a communication bus.
Wherein:
the processor, communication interface, and memory communicate with each other via a communication bus.
A communication interface for communicating with network elements of other devices, such as clients or other servers, etc.
And the processor is used for executing the program, and can specifically execute relevant steps in the embodiment of the resource pool scheduling method.
In particular, the program may include program code including computer-operating instructions.
The processor may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the server may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
And the memory is used for storing programs. The memory may comprise high-speed RAM memory or may further comprise non-volatile memory, such as at least one disk memory.
The program may be specifically operative to cause the processor to: acquiring the Numa information of the computing node;
after each scheduling of the Openstack resource pool is completed, unifying a scheduling result to a Numa node view;
initializing a Numa node which is not called and allocated according to the Numa node view, and calculating allocation rate when the Numa node is allocated and used by an Openstack resource pool;
periodically updating the memory and CPU utilization rate of each Numa node;
when the resource is scheduled, the Numa node information is counted, and a proper Numa node is filtered out;
calculating node weights of the filtered Numa nodes, screening the Numa node with the maximum weight, and dispatching the virtual machine to the Numa node;
if the weight value is calculated equally, a Numa node is selected randomly for scheduling;
and after the dispatching is completed, updating the view information of the Numa nodes and the distribution rate information.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The required structure for a construction of such a system is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided for disclosure of enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in a resource pool scheduling system according to embodiments of the present invention may be implemented in practice using a microprocessor or digital signal processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order. These words may be interpreted as names.

Claims (10)

1. A resource pool scheduling method, characterized by comprising the following steps:
acquiring Numa information of compute nodes;
after each scheduling operation of the OpenStack resource pool is completed, consolidating the scheduling result into a Numa node view;
initializing, according to the Numa node view, Numa nodes that have not yet been allocated by scheduling, and calculating an allocation rate when a Numa node is allocated for use by the OpenStack resource pool;
periodically updating the memory and CPU utilization of each Numa node;
during resource scheduling, aggregating the Numa node information and filtering out the suitable Numa nodes;
calculating node weights for the filtered Numa nodes, selecting the Numa node with the highest weight, and scheduling the virtual machine onto that Numa node;
if the calculated weights are equal, randomly selecting one Numa node for scheduling;
and after scheduling is completed, updating the Numa node view information and the allocation rate information.
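The scheduling flow recited in claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: the record layout, the weighting formula, and all identifiers (`NumaScheduler`, `filter_nodes`, `weight`) are assumptions introduced for clarity, since the claim does not specify them.

```python
import random

class NumaScheduler:
    """Hypothetical sketch of the claimed Numa-aware scheduling flow."""

    def __init__(self, numa_view):
        # numa_view: {node_id: {"cores", "threads_per_core",
        #                       "alloc_rate", "cpu_util", "mem_util"}}
        self.numa_view = numa_view

    def filter_nodes(self, vm_cores):
        # Filtering rule per claim 4: the VM's core count must be less
        # than the node's core count times its threads per core.
        return [nid for nid, n in self.numa_view.items()
                if vm_cores < n["cores"] * n["threads_per_core"]]

    def weight(self, node):
        # Assumed weighting: prefer nodes with low allocation and utilization.
        # The patent does not disclose the actual weight formula.
        return 1.0 - (node["alloc_rate"] + node["cpu_util"] + node["mem_util"]) / 3.0

    def schedule(self, vm_cores):
        candidates = self.filter_nodes(vm_cores)
        if not candidates:
            return None
        weights = {nid: self.weight(self.numa_view[nid]) for nid in candidates}
        best = max(weights.values())
        # Random tie-break among equally weighted nodes, as in the claim.
        return random.choice([nid for nid, w in weights.items() if w == best])
```

After a node is chosen, the Numa node view and the allocation rate would be updated, per the final step of the claim.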
2. The method of claim 1, wherein the compute node Numa information includes the compute node number, Numa node number, CPU frequency, core count, threads per core, L1 cache size, L2 cache size, and L3 cache size.
3. The method of claim 1, wherein the step of initializing, according to the Numa node view, Numa nodes that have not yet been allocated by scheduling, and calculating an allocation rate when a Numa node is allocated for use by the OpenStack resource pool further comprises:
initializing, according to the Numa node view, the Numa nodes that have not yet been allocated by scheduling;
setting the allocation rate of an unallocated Numa node to 0 and its use value to 0.001;
and calculating the allocation rate according to a preset formula after the Numa node is allocated for use by the OpenStack resource pool.
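The initialization in claim 3 can be sketched as below. The 0 and 0.001 values come from the claim itself; the post-allocation formula shown (allocated cores divided by the node's total hardware threads) is an assumption, since the patent says only "a preset formula".

```python
def init_unallocated_node(node):
    # Per claim 3: unallocated Numa nodes start with an allocation
    # rate of 0 and a nominal use value of 0.001.
    node["alloc_rate"] = 0.0
    node["use_value"] = 0.001
    return node

def update_alloc_rate(node, allocated_cores):
    # Assumed "preset formula": the fraction of the node's hardware
    # threads currently allocated to virtual machines.
    total_threads = node["cores"] * node["threads_per_core"]
    node["alloc_rate"] = allocated_cores / total_threads
    return node["alloc_rate"]
```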
4. The method of claim 1, wherein during the resource scheduling step the Numa node information is aggregated, and the filtering rule is that:
the number of virtual machine cores on a Numa node is less than the node's core count multiplied by its threads per core.
5. A resource pool scheduling system, comprising:
a static processing module, configured to acquire Numa information of compute nodes;
a dynamic processing module, configured to consolidate the scheduling result into a Numa node view after each scheduling operation of the OpenStack resource pool is completed;
a node allocation submodule, configured to initialize, according to the Numa node view, Numa nodes that have not yet been allocated by scheduling, and to calculate an allocation rate after a Numa node is allocated for use by the OpenStack resource pool;
a utilization calculation submodule, configured to periodically update the memory and CPU utilization of each Numa node;
an OpenStack resource pool filtering module, configured to aggregate the Numa node information and filter out the suitable Numa nodes during resource scheduling;
a weight calculation module, configured to calculate node weights for the filtered Numa nodes, select the Numa node with the highest weight, and schedule the virtual machine onto that Numa node;
wherein if the calculated weights are equal, one Numa node is randomly selected for scheduling;
and an updating module, configured to update the Numa node view information and the allocation rate information after scheduling is completed.
6. The system of claim 5, wherein the compute node Numa information includes the compute node number, Numa node number, CPU frequency, core count, threads per core, L1 cache size, L2 cache size, and L3 cache size.
7. The system of claim 5, wherein the node allocation submodule is further configured to:
initialize, according to the Numa node view, the Numa nodes that have not yet been allocated by scheduling;
set the allocation rate of an unallocated Numa node to 0 and its use value to 0.001;
and calculate the allocation rate according to a preset formula after the Numa node is allocated for use by the OpenStack resource pool.
8. The system of claim 5, wherein the filtering rule of the OpenStack resource pool filtering module is that:
the number of virtual machine cores on a Numa node is less than the node's core count multiplied by its threads per core.
9. A server, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another via the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform the operations corresponding to the resource pool scheduling method of any one of claims 1 to 4.
10. A computer storage medium having stored therein at least one executable instruction that causes a processor to perform the operations corresponding to the resource pool scheduling method of any one of claims 1 to 4.
CN201910792343.9A 2019-08-26 2019-08-26 Resource pool scheduling method, system, server and storage medium Active CN112433841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910792343.9A CN112433841B (en) 2019-08-26 2019-08-26 Resource pool scheduling method, system, server and storage medium


Publications (2)

Publication Number Publication Date
CN112433841A CN112433841A (en) 2021-03-02
CN112433841B true CN112433841B (en) 2023-08-01

Family

ID=74690303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910792343.9A Active CN112433841B (en) 2019-08-26 2019-08-26 Resource pool scheduling method, system, server and storage medium

Country Status (1)

Country Link
CN (1) CN112433841B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201607439D0 (en) * 2016-04-28 2016-06-15 Metaswitch Networks Ltd Configuring host devices
CN108196958A (en) * 2017-12-29 2018-06-22 北京泽塔云科技股份有限公司 Scheduling of resource distribution method, computer system and super fusion architecture system
EP3382543A1 (en) * 2017-03-29 2018-10-03 Juniper Networks, Inc. Micro-level monitoring, visibility and control of shared resources internal to a processor of a host machine for a virtual environment
CN108694071A (en) * 2017-03-29 2018-10-23 瞻博网络公司 More cluster panels for distributed virtualization infrastructure elements monitoring and policy control
CN109885377A (en) * 2018-11-23 2019-06-14 ***股份有限公司 The method of unified resource scheduling coordinator and its creation virtual machine and/or container, unified resource dispatch system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NUMA architecture; lubanseven; https://www.cnblogs.com/xingzheanan/p/10547387.html; 1-8 *
Research on a container-based big data and virtualization convergence platform; Zu Lijun; Information Technology and Standardization; Vol. 2019, No. 06; 27-30+50 *


Similar Documents

Publication Publication Date Title
CN109684065B (en) Resource scheduling method, device and system
US10754704B2 (en) Cluster load balancing based on assessment of future loading
Cho et al. Natjam: Design and evaluation of eviction policies for supporting priorities and deadlines in mapreduce clusters
CN106371894B (en) Configuration method and device and data processing server
WO2016197716A1 (en) Task scheduling method and device
CN114741207B (en) GPU resource scheduling method and system based on multi-dimensional combination parallelism
JP2021521518A (en) Virtual machine scheduling method and equipment
CN109191287B (en) Block chain intelligent contract fragmentation method and device and electronic equipment
US20160182320A1 (en) Techniques to generate a graph model for cloud infrastructure elements
US20160232026A1 (en) Selecting a host for a virtual machine using a hardware multithreading parameter
CN111324427B (en) Task scheduling method and device based on DSP
US9184982B2 (en) Balancing the allocation of virtual machines in cloud systems
CN111880939A (en) Container dynamic migration method and device and electronic equipment
US20210200704A1 (en) Input/output command rebalancing in a virtualized computer system
US9069621B2 (en) Submitting operations to a shared resource based on busy-to-success ratios
CN107357640B (en) Request processing method and device for multi-thread database and electronic equipment
CN113626173B (en) Scheduling method, scheduling device and storage medium
CN114629960A (en) Resource scheduling method, device, system, device, medium, and program product
CN104679575A (en) Control system and control method for input and output flow
CN112433841B (en) Resource pool scheduling method, system, server and storage medium
CN111831408A (en) Asynchronous task processing method and device, electronic equipment and medium
CN109558214B (en) Host machine resource management method and device in heterogeneous environment and storage medium
CN111352710B (en) Process management method and device, computing equipment and storage medium
CN111353766A (en) Service process processing system and method of distributed service system
CN112395081A (en) Resource online automatic recovery method, system, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant