CN111651253B - Computing resource scheduling method and device - Google Patents


Info

Publication number: CN111651253B (application number CN202010471068.3A)
Authority: CN (China)
Prior art keywords: computing, node, nodes, task, target
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN111651253A
Inventors: 曹畅, 唐雄燕, 张帅, 何涛
Current assignee: China United Network Communications Group Co Ltd
Original assignee: China United Network Communications Group Co Ltd
Application filed by China United Network Communications Group Co Ltd

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06F ELECTRIC DIGITAL DATA PROCESSING › G06F9/00 Arrangements for program control, e.g. control units › G06F9/06 using stored programs › G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt › G06F9/4806 Task transfer initiation or dispatching
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU] › G06F9/5005 to service a request › G06F9/5027 the resource being a machine, e.g. CPUs, Servers, Terminals › G06F9/505 considering the load
    • G06F9/50 Allocation of resources › G06F9/5061 Partitioning or combining of resources

Abstract

The present application provides a computing resource scheduling method and device in the field of communications technology, intended to solve the problem of excessive load on a software-defined network (SDN) controller. The method comprises the following steps: the SDN controller receives a first computing task sent by customer premises equipment (CPE); the SDN controller determines N target core nodes according to the demand information of the first computing task, the network state of each of M core nodes, and the total idle computing power resources that each of the M core nodes can schedule; and the SDN controller allocates second computing tasks to the N target core nodes through the CPE. The method and device are applied in the process of scheduling computing power resources.

Description

Computing resource scheduling method and device
Technical Field
The present application relates to the field of communications, and in particular, to a method and an apparatus for scheduling computational power resources.
Background
With the development of artificial intelligence (AI) technology, computing is becoming ever more important in social production. As computing demand surges, local computing alone can hardly satisfy all user requirements, and many computing tasks need edge nodes (internal computing nodes) to provide computing power resources. However, when multiple internal computing nodes provide computing resources for a single computing task, a software-defined network (SDN) controller needs to schedule those computing resources.
In the prior art, an SDN controller manages a plurality of internal computing nodes. The user terminal uploads the computing resources required by a computing task to the SDN controller through customer premises equipment (CPE). The SDN controller then directs a plurality of internal computing nodes to provide computing resources for the computing task. However, in doing so the SDN controller must not only allocate computing subtasks to the internal computing nodes according to each node's computing resources, but also compute a large number of routes between the CPE and the internal computing nodes, which greatly increases the load on the SDN controller.
Disclosure of Invention
The application provides a computing resource scheduling method and device, which are used for solving the problem of overlarge load of an SDN controller.
To achieve this purpose, the following technical solutions are adopted:
in a first aspect, the present application provides a method for scheduling computing resources, where the method includes:
the SDN controller receives demand information of a first computing task sent by customer premises equipment (CPE); the SDN controller determines N target core nodes according to the demand information of the first computing task, the network state of each of M core nodes, and the idle computing power resources that each of the M core nodes can schedule, where the target core nodes are used to schedule the computing power resources required by the first computing task and N is a positive integer less than or equal to M; and the SDN controller allocates second computing tasks to the N target core nodes through the CPE, where each second computing task is a part of the first computing task.
With this technical solution, the SDN controller determines the target core nodes according to the demand information of the computing task, the network state of each core node, and the idle computing power each core node can schedule, so that the target core nodes can meet the task's requirements and are therefore guaranteed to complete the computing task. Because a core node manages a plurality of internal computing nodes, the computing resources it can schedule far exceed those of a single internal computing node, so fewer core nodes than internal computing nodes are needed to complete a given computing task. Since the core nodes are fewer and the SDN controller manages only core nodes, the number of nodes the SDN controller manages is reduced, and because the controller needs to compute only a small number of routes between the CPE and the target core nodes, its load is reduced as well. The SDN controller then distributes the computing task to the target core nodes through the CPE, and the target core nodes take charge of scheduling the computing resources the task requires; in this way the core nodes share the SDN controller's work and its load is effectively reduced.
In a second aspect, the present application provides an apparatus for scheduling computing resources, the apparatus comprising:
a receiving unit, configured to receive a first computing task sent by customer premises equipment (CPE); a processing unit, configured to determine N target core nodes according to the demand information of the first computing task, the network state of each of M core nodes, and the total idle computing power resources that each of the M core nodes can schedule, where the target core nodes are used to schedule the computing power resources required by the first computing task, the total idle computing power resources are the sum of the idle computing power resources of all the computing nodes managed by one core node, and N is a positive integer less than or equal to M; and a sending unit, configured to allocate second computing tasks to the N target core nodes through the CPE, where each second computing task is a part of the first computing task.
In one possible design, the requirement information for the first computing task includes: network delay, network jitter, network packet loss rate, and computational resources.
In one possible design, the processing unit is further to determine, for each of the N target core nodes, a route between the target core node and the CPE by the SDN controller.
In one possible design, the route between the target core node and the CPE includes a first route and a second route.
In a third aspect, the present application provides an apparatus for scheduling computing resources, the apparatus comprising: a processor and a communication interface. The communication interface is coupled to the processor, and the processor is configured to execute a computer program or instructions to implement the computing resource scheduling method described in the first aspect and any one of its possible implementations.
In a fourth aspect, the present application provides a computer-readable storage medium, in which instructions are stored, and when the instructions are executed on a computer, the instructions cause the computer to perform the method for scheduling computational resources described in the first aspect and any one of the possible implementation manners of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising instructions that, when run on a computer, cause the computer to perform the method for scheduling computational resources as described in the first aspect and any one of the possible implementations of the first aspect.
In a sixth aspect, the present application provides a chip comprising a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute a computer program or instructions to implement the computing resource scheduling method described in the first aspect and any one of the possible implementations of the first aspect.
For the technical problems solved and the technical effects achieved by the above computing resource scheduling apparatus, computer device, computer-readable storage medium, computer program product, or chip, reference may be made to the description of the first aspect; details are not repeated here.
Drawings
Fig. 1 is a system architecture diagram of a communication system according to an embodiment of the present application;
fig. 2 is a system architecture diagram of a communication system according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a scheduling method of computing resources according to an embodiment of the present application;
fig. 4 is a schematic flowchart of another method for scheduling computational resources according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a method for determining a target external computing node according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a scheduling apparatus for computing resources according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of another scheduling apparatus for computing resources according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship. For example, A/B may be understood as either A or B.
The terms "first" and "second" in the description and claims of the present application are used to distinguish between different objects, and are not used to describe a particular order of objects.
Furthermore, the terms "including" and "having," and any variations thereof, as referred to in the description of the present application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to only those steps or modules recited, but may alternatively include other steps or modules not recited, or may alternatively include other steps or modules inherent to such process, method, article, or apparatus.
In addition, in the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as examples, illustrations or explanations. Any embodiment or design described herein as "exemplary" or "such as" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "e.g.," is intended to present concepts in a concrete fashion.
In order to facilitate understanding of the technical solutions of the present application, a brief introduction will be made below on some concepts related to the embodiments of the present application.
1. Computing power
Computing power is measured in different units in different application scenarios: hash operations per second (H/s) for Bitcoin, and floating-point operations per second (FLOPS) for AI and graphics processing; the demand of an intelligent society for computing power is mainly floating-point capability. For security purposes, the Bitcoin network must perform intensive mathematical and cryptography-related operations. For example, when the network reaches a hash rate of 10 Th/s, it can perform 10 trillion hash calculations per second.
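The unit conversion in the example above can be sketched in a few lines of code (the function name is illustrative, not part of the patent):

```python
# Convert a hash rate given in terahashes per second (Th/s) into hashes
# per second: 1 Th/s = 10**12 hash operations per second.
def terahashes_to_hashes_per_second(th_per_s: float) -> float:
    return th_per_s * 10**12

# A network hash rate of 10 Th/s is 10 trillion hash calculations per second.
network_rate = terahashes_to_hashes_per_second(10)
```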
2. Network node
A network node is a computer or other device connected to a network that has an independent address and can send or receive data. Nodes may be workstations, clients, network users, or personal computers, as well as servers, printers, and other network-connected devices. Every workstation, server, terminal device, or network device, that is, every device with its own unique network address, is a network node.
3. Mobile Edge Computing (MEC)
MEC uses the radio access network to provide, close to telecommunications users, the IT services and cloud computing capabilities they need, creating a carrier-grade service environment with high performance, low latency, and high bandwidth. It accelerates the delivery of content, services, and applications in the network, letting users enjoy an uninterrupted, high-quality network experience.
The above is an introduction to the terminology referred to in the present application and will not be described in detail below. The following describes an example environment for implementing embodiments of the present application.
Fig. 1 is a schematic diagram of an architecture of a communication system according to an embodiment of the present disclosure. The communication system includes: a terminal, a CPE, an SDN controller, a core node, and an internal compute node.
The terminal may be any of various handheld devices, vehicle-mounted devices, wearable devices, and computers with communication functions, which is not limited in the embodiments of the present application. For example, the handheld device may be a smartphone, the vehicle-mounted device may be an in-vehicle navigation system, and the wearable device may be a smart band. The computer may be a personal digital assistant (PDA), a tablet computer, or a laptop computer.
The CPE is a mobile-signal access device that receives a mobile signal and forwards it as a wireless Wi-Fi signal. As a device that converts a high-speed 4G or 5G signal into a Wi-Fi signal, it can support a large number of terminals online at the same time.
An SDN controller is an application in a software-defined network responsible for flow control to ensure an intelligent network. SDN controllers are based on protocols such as OpenFlow, which allow the controller to tell switches where to forward packets. OpenFlow, the core technology of SDN, separates the control plane of network devices from the data plane, enabling flexible control of network traffic, making the network more intelligent as a pipeline, and providing a good platform for innovation in core networks and applications.
An SDN controller may manage M core nodes, and one core node may manage L internal compute nodes. Taking fig. 1 as an example, the SDN controller manages a core node a, a core node B, and a core node C, and the core node a manages an internal computing node Aa, an internal computing node Ab, and an internal computing node Ac.
In the embodiment of the present application, the core node is a network node, the internal computing node is a network node having an MEC function, and an operator to which the core node belongs is the same as an operator to which the internal computing node belongs.
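The management hierarchy just described (one SDN controller manages M core nodes; each core node manages L internal computing nodes, and its schedulable idle computing power is the sum over those nodes) can be sketched as a minimal data structure. Class and field names here are illustrative, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class InternalComputeNode:
    name: str
    idle_tflops: float  # idle computing power resource of this node

@dataclass
class CoreNode:
    name: str
    internal_nodes: list = field(default_factory=list)

    def total_idle_tflops(self) -> float:
        # Total idle computing power a core node can schedule: the sum of
        # the idle resources of all internal computing nodes it manages.
        return sum(n.idle_tflops for n in self.internal_nodes)

@dataclass
class SDNController:
    core_nodes: list = field(default_factory=list)

# Mirrors Fig. 1: core node A manages internal nodes Aa, Ab, and Ac
# (the TFLOPS figures are made up for illustration).
core_a = CoreNode("A", [InternalComputeNode("Aa", 100),
                        InternalComputeNode("Ab", 150),
                        InternalComputeNode("Ac", 50)])
controller = SDNController([core_a, CoreNode("B"), CoreNode("C")])
```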
Optionally, the communication system may further comprise an external computing node. Fig. 2 is a schematic structural diagram of another communication system according to an embodiment of the present application. The communication system includes: a terminal, a CPE, an SDN controller, a core node, an internal compute node, and an external compute node.
In the embodiment of the present application, the external computing node is a network node having an MEC function, and an operator to which the core node and the internal computing node belong is different from an operator to which the external computing node belongs.
It should be noted that, when the computing power resources that a core node can schedule cannot satisfy those required by a computing task, the core node may use an external computing node to provide computing power resources for the task. For example, core node A may request external computing node Ad to provide computing power resources for a computing task.
The technical solutions provided in the embodiments of the present application can be applied to the above communication systems. The network architecture and service scenarios described in the embodiments are intended to illustrate the technical solutions more clearly and do not limit them; a person of ordinary skill in the art will recognize that, as network architectures evolve and new service scenarios emerge, the technical solutions provided here remain applicable to similar technical problems.
As shown in fig. 3, a method for scheduling computing resources provided in an embodiment of the present application includes the following steps:
s101, the SDN controller receives a first calculation task sent by the CPE.
It should be noted that the first computing task is a task that needs to call a computing resource to be implemented. The first computing task consists of data and algorithms. For example, the first computing task may be face recognition.
Optionally, after the SDN controller receives the first calculation task sent by the CPE, the SDN controller analyzes the first calculation task to obtain information of the first calculation task.
The information of the first computing task includes a task ID (identifier) and demand information. The demand information includes: a network delay requirement, a network jitter requirement, a network packet loss rate requirement, and a computing resource requirement.
It should be noted that the network delay requirement is the computing task's requirement on network delay, the network jitter requirement indicates its requirement on network jitter, the network packet loss rate requirement is its requirement on the network packet loss rate, and the computing resource requirement is its requirement for computing resources. For example, for a face recognition task, the network delay requirement may be an end-to-end (E2E) delay of no more than 5 ms, the network jitter requirement that jitter be less than 1 ms, the network packet loss rate requirement that the loss rate be less than 0.0001, and the computing resource requirement 100 TFLOPS.
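The demand information carried by a computing task can be sketched as a small record type. The field names are illustrative; the values mirror the face-recognition example in the text (E2E delay of at most 5 ms, jitter below 1 ms, packet loss rate below 0.0001, 100 TFLOPS):

```python
from dataclasses import dataclass

@dataclass
class DemandInfo:
    max_delay_ms: float    # network delay requirement
    max_jitter_ms: float   # network jitter requirement
    max_loss_rate: float   # network packet loss rate requirement
    required_tflops: float # computing resource requirement

face_recognition = DemandInfo(max_delay_ms=5.0,
                              max_jitter_ms=1.0,
                              max_loss_rate=0.0001,
                              required_tflops=100.0)
```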
S102, the SDN controller determines N target core nodes according to the requirement information of the first computing task, the network state of each core node in the M core nodes and total idle computing power resources which can be scheduled by each core node in the M core nodes.
Wherein the network state of the core node comprises: network delay, network jitter, and network packet loss rate. The total idle computing power resource is the sum of the idle computing power resources of all the internal computing nodes managed by one core node. The target core node is responsible for scheduling computational resources required for the first computational task. N is a positive integer less than or equal to M.
Optionally, before the SDN controller determines the N target core nodes, each of the M core nodes sends its information to the SDN controller. The information of a core node includes: its network state, its total idle computing power resources, and its network address.
In one possible implementation, the controller determines Q core nodes from the M core nodes according to the network delay, network jitter, and network packet loss rate requirements of the first computing task and the network state of each of the M core nodes. The network state of each of the Q core nodes meets the network delay, network jitter, and network packet loss rate requirements of the first computing task. The controller then determines one or more target core nodes from the Q core nodes according to the computing resource requirement of the first computing task and the total idle computing power resources that each node can schedule.
In one possible design, if among the Q core nodes there exists a core node whose total idle computing power resources are greater than or equal to the computing resource requirement of the first computing task, the controller determines any one such core node as the target core node. That is, the computing resources required by the first computing task are scheduled by a single target core node.
In another possible design, if the total idle computing power resources of every one of the Q core nodes are less than the computing resource requirement of the first computing task, the controller determines a plurality of target core nodes such that the sum of their total idle computing power resources is greater than or equal to the computing resource requirement of the first computing task.
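The two-step selection above can be sketched as follows: first filter the M core nodes down to the Q nodes whose network state meets the task's delay, jitter, and loss requirements, then choose target nodes by schedulable idle computing power. Function and field names are illustrative, and the largest-first ordering in the multi-node case is one possible choice, not mandated by the patent:

```python
def filter_by_network_state(core_nodes, demand):
    # demand and each node's state are dicts with delay_ms, jitter_ms, loss_rate
    return [n for n in core_nodes
            if n["delay_ms"] <= demand["delay_ms"]
            and n["jitter_ms"] <= demand["jitter_ms"]
            and n["loss_rate"] <= demand["loss_rate"]]

def choose_targets(candidates, required_tflops):
    # Design 1: a single node whose idle capacity covers the task suffices.
    for n in candidates:
        if n["idle_tflops"] >= required_tflops:
            return [n]
    # Design 2: otherwise accumulate nodes (largest first) until the sum
    # of their idle capacity covers the requirement.
    targets, total = [], 0.0
    for n in sorted(candidates, key=lambda n: n["idle_tflops"], reverse=True):
        targets.append(n)
        total += n["idle_tflops"]
        if total >= required_tflops:
            return targets
    return []  # not satisfiable by the candidate set

demand = {"delay_ms": 5, "jitter_ms": 1, "loss_rate": 0.0001}
nodes = [{"name": "A", "delay_ms": 3, "jitter_ms": 0.5, "loss_rate": 1e-5, "idle_tflops": 300},
         {"name": "B", "delay_ms": 4, "jitter_ms": 0.8, "loss_rate": 1e-5, "idle_tflops": 450},
         {"name": "C", "delay_ms": 9, "jitter_ms": 0.5, "loss_rate": 1e-5, "idle_tflops": 900}]
q_nodes = filter_by_network_state(nodes, demand)   # C is excluded by delay
targets = choose_targets(q_nodes, 750)             # needs both A and B
```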
Optionally, if the controller determines a plurality of target core nodes, the controller splits the first computing task into a plurality of second computing tasks. Each second computing task is a part of the first computing task, and the computing power resources it requires are scheduled by the target core node to which it is allocated.
Illustratively, the computing resource requirement of the first computing task is 750 TFLOPS. The controller splits the first computing task into a second computing task a and a second computing task b, where the computing resource requirement of second computing task a is 300 TFLOPS and that of second computing task b is 450 TFLOPS.
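One way to split the first computing task across the chosen target core nodes is in proportion to their idle capacity; this is a sketch under that assumption (the patent does not prescribe a split rule, and the names are illustrative). With targets offering 300 and 450 TFLOPS of idle capacity and a 750 TFLOPS task, it reproduces the 300/450 split in the example above:

```python
def split_task(required_tflops, target_idle_tflops):
    total = sum(target_idle_tflops)
    assert total >= required_tflops, "targets cannot cover the task"
    # Each target receives a share proportional to its idle capacity.
    # Because required_tflops <= total, no share exceeds that capacity.
    return [required_tflops * idle / total for idle in target_idle_tflops]

second_tasks = split_task(750, [300, 450])
```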
And S103, the SDN controller distributes second computing tasks to the N target core nodes through the CPE.
In one possible implementation, the SDN controller allocates, through the CPE, the second computing task corresponding to a task ID to a target core node according to the task ID of the second computing task and the network address of the target core node to which it is allocated.
Optionally, before the SDN controller allocates the second computational task to the N target core nodes, for each of the N target core nodes, the SDN controller determines a route between the target core node and the CPE. It should be noted that the route between the target core node and the CPE includes a first route and a second route.
In one possible design, the first route is an optimal path between the target core node and the CPE, and the second route is a suboptimal path between the target core node and the CPE.
It should be noted that the controller may determine the optimal path and the suboptimal path between the target core node and the CPE according to factors such as network jitter, network packet loss rate, network delay, and the like.
It can be understood that the controller determines a first route and a second route between the target core node and the CPE and sends both routes to the CPE. In this way, when the network state of the first route is unstable, the CPE can switch to the second route, which improves the robustness of the network; and because the CPE transmits data over the better of the two paths, the stability of the network is also improved.
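The primary/backup behavior just described can be sketched as a small selection function (names and route strings are illustrative): the CPE prefers the first (optimal) route and falls back to the second (suboptimal) route when the first becomes unstable.

```python
def select_route(first_route, second_route, first_route_stable: bool):
    # Prefer the optimal path; fall back to the suboptimal one
    # when the first route's network state is unstable.
    return first_route if first_route_stable else second_route

chosen = select_route("CPE->R1->coreA", "CPE->R2->R3->coreA",
                      first_route_stable=False)
```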
Based on the technical solution shown in fig. 3, the SDN controller determines the target core nodes according to the demand information of the computing task, the network state of each core node, and the idle computing power each core node can schedule, ensuring that the target core nodes can meet the task's requirements and therefore complete the computing task. Because a core node manages a plurality of internal computing nodes, the computing resources it can schedule far exceed those of a single internal computing node, so fewer core nodes than internal computing nodes are needed to complete a given computing task. Since the core nodes are fewer and the SDN controller manages only core nodes, the number of nodes the SDN controller manages is reduced, and because the controller needs to compute only a small number of routes between the CPE and the target core nodes, its load is reduced as well. The SDN controller then distributes the computing tasks to the target core nodes through the CPE, and the target core nodes take charge of scheduling the computing resources the tasks require, so the core nodes share the SDN controller's work and its load is effectively reduced.
After the SDN controller allocates the second computing tasks to the N target core nodes through the CPE, each target core node receives its second computing task and schedules the computing power resources that the task requires.
The following describes a process of scheduling computational power resources for the second computational task by the target core node. As shown in fig. 4, a method for scheduling computing resources provided in an embodiment of the present application includes the following steps:
s201, the target core node receives a second computing task sent by the SDN controller through the CPE.
Wherein the second computing task is part of the first computing task.
S202, the target core node sends first request messages to L internal computing nodes managed by the target core node respectively.
The first request message is used for instructing each internal computing node in the L internal computing nodes to send idle computing resources of the internal computing node to the target core node.
In one possible design, the first request message includes requirement information for the second computing task. Optionally, the first request message may further include one or more of a message identification, a task identification of the second computing task, a reserved field, and a check bit.
It should be noted that, in practical applications of the embodiments of the present application, the first request message may go by other names, which is not limited in the embodiments of the present application.
S203, the target core node receives a first response message sent by each internal computing node in the L internal computing nodes.
Wherein the first response message includes the free computing power resources and the network address of the internal computing node. Optionally, the first response message may further include one or more of a message identifier, a task identifier of the second computing task, requirement information of the second computing task, a reserved field, and a check bit.
S204, the target core node judges whether the total idle computing power resource is larger than or equal to the computing power resource requirement of the second computing task.
Optionally, before the target core node determines whether the total idle computation power resource is greater than or equal to the computation power resource requirement of the second computation task, the target core node determines the total idle computation power resource according to the idle computation power resource of each internal computation node.
In one possible design, if the total idle computing power resource is greater than or equal to the computing power resource requirement of the second computing task, the target core node performs step S205.
In another possible design, if the total idle computing power resource is smaller than the computing power resource requirement of the second computing task, the target core node performs step S206.
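The check in steps S204 to S206 can be sketched as follows. This is a minimal illustration, not part of the embodiment; the function and variable names are assumptions.

```python
def dispatch_second_task(idle_resources, required):
    """Decide how the target core node handles the second computing task.

    idle_resources: idle computing power resources reported by the L internal
                    computing nodes (e.g. in TFLOPS).
    required:       computing power resources required by the second task.
    """
    # Total idle computing power resource across all managed internal nodes.
    total_idle = sum(idle_resources)
    if total_idle >= required:
        return "S205"   # internal computing nodes suffice
    return "S206"       # external computing nodes must also be involved
```

For example, two internal nodes with 30 TFLOPS idle each can cover a 50 TFLOPS task, so step S205 applies.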
S205, the target core node determines P target internal computing nodes providing computing power resources for the second computing task.
The sum of the idle computing power resources of the P target internal computing nodes is greater than or equal to the computing power resources required by the second computing task, and P is a positive integer less than L.
In one possible implementation manner, if the sum of the idle computing power resources of the L internal computing nodes is greater than or equal to the computing power resource required by the second computing task, the target core node determines P target internal computing nodes that provide the computing power resource for the second computing task.
Optionally, the target core node splits the second computing task into P third computing tasks. Each third computing task is a part of the second computing task, and the computing power resources it requires are provided by the target internal computing node to which it is assigned; those required resources are less than or equal to the idle computing power resources of that node.
Optionally, the target core node allocates each third computing task to its internal computing node according to the task identifier of the third computing task and the network address of the internal computing node to which it is allocated.
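One possible splitting strategy is a greedy walk over the internal nodes, carving off a third computing task no larger than each node's idle resources. The embodiment does not mandate any particular algorithm; the names below are assumptions for illustration.

```python
def split_second_task(required, nodes):
    """Greedily split the second computing task into third computing tasks.

    nodes: list of (network_address, idle_resource) tuples reported by the
           internal computing nodes in their first response messages.
    Returns a list of (network_address, third_task_size) assignments, or
    None if idle resources are insufficient (step S206 would then apply).
    """
    assignments = []
    remaining = required
    # Visit nodes with the most idle resources first to minimize P.
    for address, idle in sorted(nodes, key=lambda n: -n[1]):
        if remaining <= 0:
            break
        share = min(idle, remaining)   # third task <= node's idle resources
        if share > 0:
            assignments.append((address, share))
            remaining -= share
    return assignments if remaining <= 0 else None
```

For a 50 TFLOPS task and two nodes with 30 TFLOPS idle each, this yields two third computing tasks of 30 and 20 TFLOPS.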
S206, the target core node determines a target external computing node or a target internal computing node and a target external computing node which provide computing power resources for the second computing task.
It can be understood that, after the target core node receives the second computing task, the total idle computing power resources that it can schedule may have changed. If the sum of the idle computing power resources of the L internal computing nodes managed by the target core node is smaller than the computing power resources required by the second computing task, the target core node cannot schedule sufficient computing power resources for the task on its own. It therefore needs external computing nodes to help provide computing power resources for the second computing task.
In one possible design, if some of the L internal computing nodes can serve as target internal computing nodes for the second computing task, but the sum of the idle computing power resources they provide is less than the computing power resources required by the second computing task, the target core node determines that the target internal computing nodes and the target external computing nodes together provide computing power resources for the second computing task.
In another possible design, if all internal computing nodes cannot provide computing resources for the second computing task, the target core node determines that the target external computing node provides computing resources for the second computing task.
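The two designs above reduce to a three-way decision. The following sketch uses assumed names and is only an illustration of the logic, not the embodiment's implementation:

```python
def determine_providers(target_internal_idle, required):
    """Decide which kinds of nodes provide resources for the second task.

    target_internal_idle: sum of idle computing power resources of the
    internal nodes able to contribute (0 if none can).
    required: computing power resources required by the second task.
    """
    if target_internal_idle >= required:
        return "internal"            # step S205 applies
    if target_internal_idle > 0:
        return "internal+external"   # internal nodes help, external make up the rest
    return "external"                # only external nodes provide resources
```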
Based on the technical solution shown in fig. 4, the target core node receives the second computing task, determines whether its total idle computing power resources are greater than or equal to the computing power resources required by the second computing task, and then determines the target internal computing nodes and/or target external nodes that provide those resources. This offloads work from the SDN controller: the controller no longer has to determine the providing nodes itself, which effectively reduces its load.
The following describes the implementation of step S206 in detail. As shown in fig. 5, a method for determining a target external computing node according to an embodiment of the present application includes the following steps:
S301, the target core node sends second request messages to the Z external computing nodes respectively.
Wherein the second request message includes the computing power bid and the computing power resource requirements of the third computing task. The second request message is used for instructing each external computing node in the Z external computing nodes to send the computing power return price of the external computing node to the target core node, and Z is a positive integer.
Optionally, the second request message may further include one or more of a message identification, a task identification of the third computing task, a reserved field, and a check bit.
It should be noted that, in practical applications of the embodiment of the present application, the second request message may have different names, for example, an external force certificate, and the like, which is not limited in the embodiment of the present application.
In the embodiments of the present application, the computing power bid indicates the cost that the target core node can accept to pay an external computing node. The computing power return price indicates the cost that an external computing node needs to be paid for providing the computing power resources.
In one possible design, the computing power return price is determined by formula one:

A = B × C (formula one)

where A is the computing power return price of the external computing node, B is the computing power resource requirement of the third computing task, and C is the service unit price of the external computing node.
Illustratively, if the computing power resource requirement of the computing task is 50 TFLOPS and the service unit price of the external computing node is 1 yuan/TFLOPS, the computing power return price of the external computing node is 50 yuan.
In one possible design, the computing power bid is determined by formula two:

D = B × F (formula two)

where D is the computing power bid, B is the computing power resource requirement of the third computing task, and F is the service unit price that the target core node can accept.
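Formulas one and two translate directly into code. The function names are assumptions; the values used in the usage example are the illustrative ones from the description.

```python
def power_return_price(requirement, node_unit_price):
    """Formula one: A = B * C, the price an external node charges.

    requirement:     computing power resource requirement B (e.g. TFLOPS).
    node_unit_price: service unit price C of the external node (yuan/TFLOPS).
    """
    return requirement * node_unit_price


def power_bid(requirement, acceptable_unit_price):
    """Formula two: D = B * F, the most the target core node will pay.

    acceptable_unit_price: unit price F the target core node can accept.
    """
    return requirement * acceptable_unit_price
```

Echoing the example above, a 50 TFLOPS task at 1 yuan/TFLOPS gives a return price of 50 yuan.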
S302, the target core node receives a second response message sent by each of the Z external computing nodes.
Wherein the second response message includes the computing power return price of the external computing node.
Optionally, the second response message may further include one or more of a message identification, a task identification of the third computing task, a computing power bid, a computing power resource requirement of the third computing task, a reserved field, and a check bit.
S303, if any of the Z external computing nodes has a computing power return price less than or equal to the computing power bid, the target core node determines the target external computing node that provides computing power resources for the second computing task.
In one possible implementation, if there are external computing nodes among the Z external computing nodes whose computing power return price is less than or equal to the computing power bid, the target core node determines the one with the lowest computing power return price as the target external computing node.
It can be understood that preferentially selecting the external computing nodes with a lower computing power return price to provide computing power resources reduces the operator's cost of completing the computing task.
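The selection rule in step S303, choosing the lowest return price that does not exceed the bid, can be sketched as follows (names and message shape are assumptions):

```python
def select_target_external_node(responses, bid):
    """Select the target external computing node from second response messages.

    responses: list of (node_address, return_price) pairs from the Z
               external computing nodes.
    bid:       the target core node's computing power bid.
    Returns the address of the node with the lowest return price that is
    less than or equal to the bid, or None if no node qualifies.
    """
    eligible = [(price, addr) for addr, price in responses if price <= bid]
    if not eligible:
        return None
    return min(eligible)[1]   # lowest return price wins
```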
Based on the technical solution shown in fig. 5, the target core node can determine the target external computing node according to the computing power bid and the computing power return prices of the external computing nodes. This offloads work from the SDN controller: the controller no longer has to determine the target external computing node that provides computing power resources for the computing task, which effectively reduces its load.
The foregoing describes the solutions provided by the embodiments of the present application, primarily from the perspective of a computer device. It will be appreciated that, in order to implement the above functions, the computer device includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that the exemplary computing resource scheduling methods described in connection with the embodiments disclosed herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the functional modules or functional units may be divided according to the method example described above, for example, each functional module or functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in a form of hardware, or may be implemented in a form of a software functional module or a functional unit. The division of the modules or units in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
As shown in fig. 6, an embodiment of the present invention provides a scheduling apparatus for computing resources, including:
a receiving unit 101, configured to receive a first computing task sent by a customer premises equipment CPE;
a processing unit 102, configured to determine N target core nodes according to requirement information of the first computation task, a network state of each of M core nodes, and a total idle computation power resource that can be scheduled by each of the M core nodes, where the target core node is used to schedule the computation power resource required by the first computation task, the total idle computation power resource is a sum of idle computation power resources of all computation nodes managed by one core node, and N is a positive integer less than or equal to M;
a sending unit 103, configured to allocate a second computation task to the N target core nodes through the CPE, where the second computation task is a part of the first computation task.
Optionally, the requirement information of the first computing task includes: network delay, network jitter, network packet loss rate, and computational resources.
Optionally, the processing unit is further configured to determine, for each target core node of the N target core nodes, a route between the target core node and the CPE.
Optionally, the route between the target core node and the CPE includes a first route and a second route.
Fig. 7 shows still another possible structure of the scheduling apparatus of computational resources involved in the above-described embodiment. The scheduling device of the computing power resource comprises: a processor 201 and a communication interface 202. The processor 201 is used to control and manage the actions of the device, for example, to perform various steps in the method flows shown in the method embodiments described above, and/or to perform other processes for the techniques described herein. The communication interface 202 is used to support the scheduling apparatus of the computing resource to communicate with other network entities. The scheduling device of computational resources may further comprise a memory 203 and a bus 204, the memory 203 being used for storing program codes and data of the device.
The processor 201 may implement or execute the various exemplary logical blocks, units, and circuits described in connection with the present disclosure. The processor may be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. A processor may also be a combination of computing functions, for example, a combination of one or more microprocessors, or of a DSP and a microprocessor.
Memory 203 may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as read-only memory, flash memory, a hard disk, or a solid state disk; the memory may also comprise a combination of memories of the kind described above.
The bus 204 may be an Extended Industry Standard Architecture (EISA) bus or the like. The bus 204 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 7, but this is not intended to represent only one bus or type of bus.
Through the description of the foregoing embodiments, it will be clear to those skilled in the art that, for convenience and simplicity of description, only the division of the functional modules is illustrated, and in practical applications, the above function distribution may be completed by different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
The present application provides a computer program product containing instructions which, when run on a computer, cause the computer to execute the computing power resource scheduling method in the above method embodiments.
The embodiments of the present application further provide a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to execute the computing power resource scheduling method in the method flows shown in the foregoing method embodiments.
The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a register, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, any suitable combination of the above, or any other form of computer readable storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an Application Specific Integrated Circuit (ASIC). In the embodiments of the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Since the scheduling apparatus of computational resources, the computer-readable storage medium, and the computer program product in the embodiments of the present invention may be applied to the method described above, the technical effect obtained by the scheduling apparatus of computational resources may also refer to the method embodiments described above, and the details of the embodiments of the present invention are not repeated herein.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope disclosed in the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A scheduling method of computing power resources is applied to a Software Defined Network (SDN) controller, wherein the SDN controller is used for managing M core nodes, the core nodes are used for managing one or more internal computing nodes, the internal computing nodes are used for providing computing power resources, and M is a positive integer; the method comprises the following steps:
the SDN controller receives a first computing task sent by Customer Premise Equipment (CPE);
the SDN controller determines N target core nodes according to the demand information of the first computing task, the network state of each core node in the M core nodes and total idle computing power resources which can be scheduled by each core node in the M core nodes, wherein the target core nodes are used for scheduling computing power resources required by the first computing task, the total idle computing power resources are the sum of the idle computing power resources of all computing nodes managed by one core node, and N is a positive integer less than or equal to M;
the SDN controller allocates a second computing task to the N target core nodes through the CPE, wherein the second computing task is a part of the first computing task.
2. The method of scheduling computing resources of claim 1, wherein the demand information for the first computing task comprises: network delay, network jitter, network packet loss rate, and computational resources.
3. The method of scheduling computational resources according to claim 1 or 2, wherein the method further comprises:
for each of the N target core nodes, the SDN controller determines a route between the target core node and the CPE.
4. The method of scheduling computing resources of claim 3, wherein the route between the target core node and the CPE comprises a first route and a second route.
5. The method of scheduling computing resources of claim 1, further comprising:
the target core node receives the second computing task sent by the CPE;
the target core node determines a target internal compute node and/or a target external compute node that provide computational power resources for a second compute task, wherein the target internal compute node is one of the internal compute nodes and the target external compute node is one of the external compute nodes.
6. The method of scheduling computing resources of claim 5, further comprising:
the target core node sends first request messages to L internal computing nodes managed by the target core node respectively, wherein the first request messages are used for indicating each internal computing node in the L internal computing nodes to send idle computing power resources of the internal computing node to the target core node;
the target core node receives a first response message sent by the internal computing node, wherein the first response message comprises idle computing resources of the internal computing node.
7. The method according to claim 6, wherein the determining, by the target core node, a target internal computing node and/or a target external computing node that provide computing resources for the second computing task comprises:
if the sum of the idle computing power resources of the L internal computing nodes is greater than or equal to the computing power resource required by the second computing task, the target core node determines P target internal computing nodes providing computing power resources for the second computing task, the P target internal computing nodes belong to the subset of the L internal computing nodes, the sum of the idle computing power resources of the P target internal computing nodes is greater than or equal to the computing power resource required by the second computing task, and P is a positive integer smaller than L;
if the sum of the idle computing power resources of the L internal computing nodes is less than the computing power resource required by the second computing task, the target core node determines the target external computing node or the target internal computing node and the target external computing node which provide the computing power resource for the second computing task.
8. The method of claim 7, wherein if the sum of the idle computing resources of the L internal computing nodes is less than the computing resources required by the second computing task, the method further comprises:
the target core node sends a second request message to each external computing node in Z external computing nodes respectively, the second request message is used for indicating the external computing nodes to send computing power return prices of the external computing nodes to the target core node, the computing power return prices are used for indicating the external computing nodes to provide costs required to be paid by computing power resources, and Z is a positive integer;
the target core node receives a second response message sent by the external computing node, wherein the second response message comprises the computing power return price of the external computing node;
and if the computing power return price of the external computing node is less than or equal to the computing power bid price, the target core node determines that the external computing node is the target external computing node, wherein the computing power bid price is used to indicate the cost that the target core node can accept to pay the external computing node.
9. An apparatus for scheduling computational resources, the apparatus comprising:
the receiving unit is used for receiving a first computing task sent by Customer Premise Equipment (CPE);
the processing unit is used for determining N target core nodes according to the requirement information of the first computing task, the network state of each core node in the M core nodes and total idle computing power resources which can be scheduled by each core node in the M core nodes, wherein the target core nodes are used for scheduling the computing power resources required by the first computing task, the total idle computing power resources are the sum of the idle computing power resources of all the computing nodes managed by one core node, and N is a positive integer less than or equal to M;
a sending unit, configured to allocate a second computation task to the N target core nodes through the CPE, where the second computation task is a part of the first computation task.
10. The apparatus of claim 9, wherein the demand information for the first computing task comprises: network delay, network jitter, network packet loss rate, and computational resources.
11. The computing resource scheduling apparatus according to claim 10,
the processing unit is further configured to determine, for each of the N target core nodes, a route between the target core node and the CPE.
12. The apparatus according to claim 11, wherein the route between the target core node and the CPE comprises a first route and a second route.
13. A server, comprising: a processor, a memory, and a communication interface; wherein, the communication interface is used for the server to communicate with other devices or networks; the memory is used for storing one or more programs, the one or more programs comprising computer executable instructions, which when executed by the processor, stored by the memory, cause the server to perform the method of scheduling computational resources of any of claims 1-8.
14. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a computer, cause the computer to perform the method of scheduling computational resources of any of claims 1-8.
CN202010471068.3A 2020-05-28 2020-05-28 Computing resource scheduling method and device Active CN111651253B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010471068.3A CN111651253B (en) 2020-05-28 2020-05-28 Computing resource scheduling method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010471068.3A CN111651253B (en) 2020-05-28 2020-05-28 Computing resource scheduling method and device

Publications (2)

Publication Number Publication Date
CN111651253A CN111651253A (en) 2020-09-11
CN111651253B true CN111651253B (en) 2023-03-14

Family

ID=72346932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010471068.3A Active CN111651253B (en) 2020-05-28 2020-05-28 Computing resource scheduling method and device

Country Status (1)

Country Link
CN (1) CN111651253B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112153153B (en) * 2020-09-28 2023-01-20 北京大学深圳研究生院 Coordinated distributed intra-network resource scheduling method and system and storage medium
CN116324723A (en) * 2020-10-10 2023-06-23 瑞典爱立信有限公司 Method and apparatus for managing load of network node
CN114489963A (en) * 2020-11-12 2022-05-13 华为云计算技术有限公司 Management method, system, equipment and storage medium of artificial intelligence application task
CN114500521A (en) * 2020-11-13 2022-05-13 ***通信有限公司研究院 Computing power scheduling method, device, scheduling equipment, system and storage medium
CN112465359B (en) * 2020-12-01 2024-03-15 中国联合网络通信集团有限公司 Calculation force calling method and device
CN112710915B (en) * 2020-12-18 2024-02-20 北京百度网讯科技有限公司 Method, device, electronic equipment and computer storage medium for monitoring power equipment
CN114691352A (en) * 2020-12-31 2022-07-01 维沃移动通信有限公司 Information processing method, device, equipment and storage medium
CN114691349A (en) * 2020-12-31 2022-07-01 维沃移动通信有限公司 Information processing method, device, equipment and storage medium
WO2022174675A1 (en) * 2021-02-22 2022-08-25 华为技术有限公司 Computing power information processing method, first network device, and system
CN113037819B (en) * 2021-02-26 2022-09-23 杭州雾联科技有限公司 Edge computing resource sharing method, device and equipment
CN113157444B (en) * 2021-03-29 2023-12-01 中国联合网络通信集团有限公司 Method and device for authenticating computing power service and readable storage medium
CN113296905B (en) * 2021-03-30 2023-12-26 阿里巴巴新加坡控股有限公司 Scheduling method, scheduling device, electronic equipment, storage medium and software product
CN113852950B (en) * 2021-06-28 2023-06-09 山东浪潮科学研究院有限公司 Intelligent mobility tracking scheduling method and device for computing network
CN113641124B (en) * 2021-08-06 2023-03-10 珠海格力电器股份有限公司 Calculation force distribution method and device, controller and building control system
CN113867973B (en) * 2021-12-06 2022-02-25 腾讯科技(深圳)有限公司 Resource allocation method and device
WO2023142091A1 (en) * 2022-01-29 2023-08-03 华为技术有限公司 Computing task scheduling apparatus, computing apparatus, computing task scheduling method and computing method
CN116709553A (en) * 2022-02-24 2023-09-05 华为技术有限公司 Task execution method and related device
CN114785851B (en) * 2022-04-20 2024-01-09 中国电信股份有限公司 Resource call processing method and device, storage medium and electronic equipment
CN114700957B (en) * 2022-05-26 2022-08-26 北京云迹科技股份有限公司 Robot control method and device with low computational power requirement of model
CN115426327B (en) * 2022-11-04 2023-01-13 北京邮电大学 Calculation force scheduling method and device, electronic equipment and storage medium
CN116739202B (en) * 2023-08-15 2024-01-23 深圳华越南方电子技术有限公司 Power routing method, system, equipment and storage medium
CN117434990B (en) * 2023-12-20 2024-03-19 成都易联易通科技有限责任公司 Granary environment control method and system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107273179A (en) * 2016-04-07 2017-10-20 ***通信有限公司研究院 The dispatching method and device of a kind of hardware resource
CN107404733A (en) * 2017-08-22 2017-11-28 山东省计算中心(国家超级计算济南中心) A kind of 5G method of mobile communication and system based on MEC and layering SDN
CN105027512B (en) * 2014-01-29 2018-05-18 华为技术有限公司 Data transmission method, transfer control method and equipment
CN108513655A (en) * 2015-10-13 2018-09-07 施耐德电器工业公司 Software definition automated system and its framework
CN110012508A (en) * 2019-04-10 2019-07-12 中南大学 A kind of resource allocation methods and system of the edge calculations towards super-intensive network
CN110515731A (en) * 2019-08-22 2019-11-29 北京浪潮数据技术有限公司 A kind of data processing method, apparatus and system
CN110891093A (en) * 2019-12-09 2020-03-17 中国科学院计算机网络信息中心 Method and system for selecting edge computing node in delay sensitive network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8582584B2 (en) * 2005-10-04 2013-11-12 Time Warner Cable Enterprises Llc Self-monitoring and optimizing network apparatus and methods

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105027512B (en) * 2014-01-29 2018-05-18 华为技术有限公司 Data transmission method, transfer control method and equipment
CN108513655A (en) * 2015-10-13 2018-09-07 施耐德电器工业公司 Software definition automated system and its framework
CN107273179A (en) * 2016-04-07 2017-10-20 ***通信有限公司研究院 The dispatching method and device of a kind of hardware resource
CN107404733A (en) * 2017-08-22 2017-11-28 山东省计算中心(国家超级计算济南中心) A kind of 5G method of mobile communication and system based on MEC and layering SDN
CN110012508A (en) * 2019-04-10 2019-07-12 中南大学 A kind of resource allocation methods and system of the edge calculations towards super-intensive network
CN110515731A (en) * 2019-08-22 2019-11-29 北京浪潮数据技术有限公司 A kind of data processing method, apparatus and system
CN110891093A (en) * 2019-12-09 2020-03-17 中国科学院计算机网络信息中心 Method and system for selecting edge computing node in delay sensitive network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Optimal Task Offloading and Resource Allocation;Sukjin Choo 等;《2018 International Conference on Information and Communication Technology Convergence》;IEEE;251-256 *
*** Computing Power Network ***; Tang Xiongyan et al.; 《***》; 1-22 *
Operator Edge Computing Network Technology ***; Lei Bo et al.; Edge Computing Consortium (ECC) and Network 5.0 Industry and Technology Innovation Alliance (N5A); 1-39 *

Also Published As

Publication number Publication date
CN111651253A (en) 2020-09-11

Similar Documents

Publication Publication Date Title
CN111651253B (en) Computing resource scheduling method and device
US9621425B2 (en) Method and system to allocate bandwidth for heterogeneous bandwidth request in cloud computing networks
Quang et al. Single and multi-domain adaptive allocation algorithms for VNF forwarding graph embedding
Hoang et al. Optimal admission control policy for mobile cloud computing hotspot with cloudlet
Wang et al. Virtual machine placement and workload assignment for mobile edge computing
US8938541B2 (en) Method and system to allocate bandwidth in cloud computing networks
CN107948271B (en) Method for determining message to be pushed, server and computing node
CN105723656A (en) Service policies for communication sessions
CN112188548B (en) Service processing method and device
CN114286413A (en) TSN network combined routing and stream distribution method and related equipment
US20210337452A1 (en) Sharing geographically concentrated workload among neighboring mec hosts of multiple carriers
US11961101B2 (en) System and method for offering network slice as a service
CN112749002A (en) Method and device for dynamically managing cluster resources
CN114168351A (en) Resource scheduling method and device, electronic equipment and storage medium
CN110708678B (en) Communication method and device
CN112714146B (en) Resource scheduling method, device, equipment and computer readable storage medium
CN114675960A (en) Computing resource allocation method and device and nonvolatile storage medium
Hung et al. A new technique for optimizing resource allocation and data distribution in mobile cloud computing
CN113453285B (en) Resource adjusting method, device and storage medium
CN114124825A (en) Data transmission method, system, device and storage medium
Wang et al. Resource allocation for edge computing over fibre‐wireless access networks
US10812325B1 (en) Service bandwidth provisioning on passive optical networks
Chiang et al. Study of adaptive dynamic replication mechanism in mobile edge computing environment
CN115243080B (en) Data processing method, device, equipment and storage medium
Thananjeyan et al. Optimum selection of mobile edge computing hosts based on extended balas-geoffrion additive algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant