CN114500405A - Resource allocation and acquisition method and device for multi-type service application

Resource allocation and acquisition method and device for multi-type service application

Info

Publication number
CN114500405A
CN114500405A (application number CN202111613817.2A)
Authority
CN
China
Prior art keywords
computing
resource
calculation
resource allocation
information
Prior art date
Legal status
Pending
Application number
CN202111613817.2A
Other languages
Chinese (zh)
Inventor
王雪晴
***
王宏来
刘辛
李大鹏
王宁
Current Assignee
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority to CN202111613817.2A priority Critical patent/CN114500405A/en
Publication of CN114500405A publication Critical patent/CN114500405A/en
Pending legal-status Critical Current

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/70 - Admission control; Resource allocation
    • H04L 47/80 - Actions related to the user profile or the type of traffic
    • H04L 47/805 - QOS or priority aware
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/12 - Shortest path evaluation
    • H04L 45/122 - Shortest path evaluation by minimising distances, e.g. by selecting a route with minimum of number of hops
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/30 - Routing of multiclass traffic
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/302 - Route determination based on requested QoS
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/50 - Queue scheduling
    • H04L 47/56 - Queue scheduling implementing delay-aware scheduling

Abstract

The invention discloses a resource allocation and acquisition method and device for multi-type service applications. The resource allocation method comprises the following steps: acquiring resource request information of a computing task and network resource information in the current networking; extracting the computing data volume and the delay tolerance of the computing application from the resource request information; sending a computing resource information request to a data center based on the computing data volume and the delay tolerance; acquiring a computing resource information result responded by the data center; obtaining a computing resource allocation result based on the computing resource information result and the network resource information; and sending the computing resource allocation result to an application manager corresponding to the computing task, so that the application manager performs resource allocation based on the computing resource allocation result.

Description

Resource allocation and acquisition method and device for multi-type service application
Technical Field
The invention relates to the technical field of internet communication computing resource allocation, in particular to a resource allocation and acquisition method and device for multi-type service application.
Background
In the internet of things and the 5G era, more and more applications pursue diversified services to realize differentiated user experience. In different application scenarios, the requirements on the processing speed of the service or data are different. In addition, other applications such as intelligent manufacturing, intelligent cities, and Intelligent Transportation Systems (ITS) may also have diverse computing needs. The diverse computing demands have led to the rapid development of various computing modes, such as edge computing, cloud computing, and fog computing.
In cloud computing, a large amount of computing resources are placed in the cloud, and an application can use cloud resources according to its computing demand. Edge computing generally deploys an edge data center closer to the user and can therefore achieve much lower computing delay than cloud computing. Because computing tasks can be processed at edge nodes, edge computing can reduce end-to-end delay, lower the network load on remote cloud services, and realize real-time, more efficient data processing, making it a powerful complement to cloud computing. To handle computing application requirements that range from delay sensitive to delay tolerant, an edge DC (Data Center) may be used in coordination with the cloud DC. The purpose of this coordination is to optimize the delay distribution of different applications: delay sensitive computing tasks are completed within their time limit, while a delay tolerant computing task, which generally occupies a large amount of IT resources (such as CPUs, memory, and disks) and network resources such as bandwidth, can be scheduled to the cloud DC for processing. Generally, reducing the network transmission time can improve application performance to some extent, but this is not the best option, because the delay of a computing application mainly consists of two parts: the network transmission delay and the computing processing time. Researchers usually consider computing resources and network resources separately, which may cause problems. For example, even when the computing resources meet the computation requirement of an application, a network bottleneck may occur because the network near the allocated computing node is congested and the network resource situation was not taken into account, so the overall performance requirement of the application cannot be met and the quality of service (QoS) is difficult to achieve.
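For concreteness, the delay decomposition described in the preceding paragraph can be written as follows; the notation is added here for illustration and does not appear in the original filing:
$T_{\mathrm{app}} = T_{\mathrm{net}} + T_{\mathrm{proc}}, \qquad T_{\mathrm{net}} \approx D / B,$
where $T_{\mathrm{app}}$ is the overall delay of the computing application, $T_{\mathrm{net}}$ the network transmission delay, $T_{\mathrm{proc}}$ the computing processing time, $D$ the computing data volume to be transmitted, and $B$ the bandwidth available on the chosen path. An allocation is acceptable only if $T_{\mathrm{app}}$ does not exceed the delay tolerance of the application, which is why computing resources and network resources need to be considered jointly.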
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for allocating and acquiring resources of multiple types of service applications, so as to solve the problems of existing resource allocation methods in which a network bottleneck may occur due to congestion, the overall performance requirement of the application cannot be met, and the quality of service (QoS) is difficult to achieve.
According to a first aspect, an embodiment of the present invention provides a resource allocation method for multi-type service applications, where the resource allocation method includes: acquiring resource request information of a computing task and network resource information in current networking; extracting the calculation data volume and the delay tolerance of the calculation application from the resource request information; sending a computing resource information request to a data center based on the computing data volume and the delay tolerance; acquiring a calculation resource information result responded by the data center; obtaining a calculation resource allocation result based on the calculation resource information result and the network resource information; and sending the calculation resource allocation result to an application manager corresponding to the calculation task, so that the application manager performs resource allocation based on the calculation resource allocation result.
With reference to the first aspect, in a first implementation manner of the first aspect, the data center includes an edge data center and a cloud data center, and the sending a computing resource information request to a data center based on the computing data volume and the delay tolerance includes the following steps: calculating the network delay and the computing processing delay of the computing task in the current networking based on the computing data volume and the delay tolerance; and respectively sending computing resource information requests to the edge data center and the cloud data center based on the network delay and the computing processing delay.
With reference to the first implementation manner of the first aspect, in a second implementation manner of the first aspect, the network resource information includes topology information of the current networking and link load conditions, and the obtaining a computing resource allocation result based on the computing resource information result and the network resource information includes: selecting nodes meeting the computing requirements of the computing task from the available computing nodes in the current networking to obtain a node group; sorting the available computing resources by computing node load value, traversing each node of the node group, performing routing based on each node, and selecting the reachable paths with the minimum hop count; calculating a weight value of each reachable path according to the link load conditions; and determining the finally allocated computing node and route according to the ordering obtained from the weight values of the reachable paths and the load values of the computing nodes.
With reference to the first implementation manner of the first aspect, in a third implementation manner of the first aspect, the obtaining a calculation resource allocation result based on the calculation resource information result and the network resource information includes: performing computing resource allocation based on the link load condition, the network delay, the computing processing delay and the computing data volume to respectively obtain edge computing resources and cloud computing resources; and selecting a route and a computing node based on the edge computing resource, the cloud computing resource and the topology information respectively, and generating an edge computing resource allocation result and a cloud computing resource allocation result.
According to a second aspect, an embodiment of the present invention provides a task processing method for a multi-type service application, where the task processing method includes: sending resource request information of the computing task to a resource allocation processor; acquiring a calculation resource allocation result fed back by the resource allocation processor in response to the resource request information; the calculation resource allocation result is obtained by the resource allocation processor executing the resource allocation method for the multi-type service application according to the first aspect or any embodiment in the first aspect; and distributing the computing tasks to corresponding routes and computing nodes for computing based on the computing resource distribution result.
According to a third aspect, an embodiment of the present invention provides a resource allocation apparatus for multi-type service applications, where the resource allocation apparatus includes: the system comprises a joint resource scheduler, an application request processor, a computing resource processor and a network state collector, wherein the network state collector is used for acquiring network resource information in the current networking; the application request processor is used for acquiring resource request information of a computing task and extracting the computing data volume and the delay tolerance of the computing application from the resource request information; the computing resource processor is used for sending a computing resource information request to a data center based on the computing data volume and the delay tolerance and acquiring a computing resource information result responded by the data center; the computing resource processor is also used for obtaining a computing resource distribution result based on the computing resource information result and the network resource information; and the joint resource scheduler is used for sending the calculation resource allocation result to an application manager corresponding to the calculation task so as to enable the application manager to allocate resources based on the calculation resource allocation result.
With reference to the third aspect, in a first implementation manner of the third aspect, the data center includes an edge data center and a cloud data center, and the computing resource processor includes: a time delay calculation module, configured to calculate the network delay and the computing processing delay of the computing task in the current networking based on the computing data volume and the delay tolerance; a computing resource information request sending module, configured to send computing resource information requests to the edge data center and the cloud data center, respectively, based on the network delay and the computing processing delay; and a computing resource information result acquisition module, configured to acquire the computing resource information result responded by the data center.
With reference to the first implementation manner of the third aspect, in a second implementation manner of the third aspect, the network resource information includes topology information and link load conditions of the current networking, and the computing resource processor further includes: a node group determination module, configured to select nodes meeting the computing requirements of the computing task from the available computing nodes in the current networking to obtain a node group; a path generation module, configured to sort the available computing resources by computing node load value, traverse each node of the node group, perform routing based on each node, and select the reachable paths with the minimum hop count; a weight value calculating module, configured to calculate a weight value of each reachable path according to the link load conditions; and a computing node determination module, configured to determine the finally allocated computing node and route according to the ordering obtained from the weight values of the reachable paths and the load values of the computing nodes.
With reference to the first implementation manner of the third aspect, in the third implementation manner of the third aspect, the network resource information includes topology information and link load conditions of the current networking, and the computing resource processor includes: the computing resource determining module is used for allocating computing resources based on the link load condition, the network delay, the computing processing delay and the computing data amount to respectively obtain edge computing resources and cloud computing resources; and the computing resource allocation module is used for selecting a route and a computing node respectively based on the edge computing resource, the cloud computing resource and the topology information, and generating an edge computing resource allocation result and a cloud computing resource allocation result.
According to a fourth aspect, an embodiment of the present invention provides a task processing device for a multi-type service application, where the task processing device includes: an application manager and a resource allocation apparatus for multi-type service application as described in any implementation manner of the third aspect, wherein the application manager is configured to perform the following steps: sending resource request information of a computing task to the resource allocation device; acquiring a calculation resource allocation result fed back by the resource allocation device in response to the resource request information; and distributing the computing tasks to corresponding routes and computing nodes for computing based on the computing resource distribution result.
According to a fifth aspect, an embodiment of the present invention provides an electronic device/mobile terminal/server, including: a memory and a processor, the memory and the processor being communicatively connected to each other, the memory storing therein computer instructions, and the processor executing the computer instructions to perform the method for allocating resources of the multi-type service application according to the first aspect or any one of the embodiments of the first aspect, or to perform the method for processing tasks of the multi-type service application according to the second aspect.
According to a sixth aspect, an embodiment of the present invention provides a computer-readable storage medium, which stores computer instructions for causing a computer to execute the resource allocation method of the multi-type service application described in the first aspect or any one of the embodiments of the first aspect, or execute the task processing method of the multi-type service application described in the second aspect.
The embodiment of the invention has the advantages that the available computing resources and network resources are comprehensively considered, and the corresponding computing resources are distributed by specifically referring to the actual computing requirements of the computing tasks, so that the requirements of the delay tolerant tasks can be met, and the service quality of the delay sensitive tasks can be ensured.
Drawings
The features and advantages of the present invention will be more clearly understood by reference to the accompanying drawings, which are illustrative and not to be construed as limiting the invention in any way, and in which:
FIG. 1 is a diagram illustrating an exemplary application scenario of an embodiment of the present invention;
fig. 2 is a schematic structural diagram illustrating a resource allocation apparatus for a multi-type service application according to an embodiment of the present invention;
FIG. 3 shows a schematic diagram of an orchestrator according to an embodiment of the invention;
FIG. 4 is a diagram illustrating a joint scheduling flow implemented by a task processing device for multi-type service applications according to an embodiment of the present invention;
FIG. 5 is a flow chart illustrating a resource allocation method for multi-type service applications according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a task processing method for a multi-type business application according to an embodiment of the present invention;
fig. 7 shows a hardware configuration diagram of a computer device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In conventional data center networks, network resources, IT resources, and the management instances of computing applications do not cooperate with one another, resulting in some isolation between computing and network resource scheduling. In order to guarantee diversified service performance in edge computing and cloud computing scenarios, the scheme provided by the embodiment of the invention performs joint resource scheduling based on the current service demand and the actual conditions of network resources and computing resources, optimizing communication and computing simultaneously to further improve application performance.
Fig. 1 is a schematic diagram of a specific application scenario of the solution of the embodiment of the present invention, in which typical network architectures of edge computing and cloud computing are shown. On the underlying data plane, the edge DCs are interconnected by nearby SDN switches, while each edge DC can communicate with a cloud DC over an optical transport network. A large number of end users or internet of things end devices can transmit their own computing tasks to a nearby edge DC to request resources to complete their computing work. On the control plane, a centralized control mode is adopted based on an SDN network architecture: the SDN controller and the SDN switches are connected and configured through the OpenFlow protocol, and proxy agents are used for protocol conversion for optical switching equipment that does not yet support SDN functions. The application manager is used for receiving computing requests from users, and the IT resource manager is used for managing the computing resources of the computing nodes in the current networking. The orchestrator (INRO) can establish communication with the application manager, the network controller, and the IT resource manager, and can acquire application request information, network state information of the current networking, and computing resource information, thereby performing joint resource scheduling according to this information.
It can be seen that, in this embodiment, the orchestrator 7 plays the main role in resource allocation. An embodiment of the present invention provides a resource allocation apparatus for multi-type service applications, that is, an apparatus implementing the functions of the orchestrator 7. As shown in fig. 2, the resource allocation apparatus mainly includes: a joint resource scheduler 21, an application request handler 22, a computing resource processor 23, and a network state collector 24.
The network state collector 24 is configured to obtain network resource information in the current networking; the network resource information mainly includes the topology information and link load conditions of the current networking.
the application request processor 22 is configured to obtain resource request information of a computing task, and extract a computing data amount and a delay tolerance of the computing application from the resource request information. The computing task refers to a specific computing task that will be brought about by a certain application or service to be implemented. In this embodiment, the calculation task may be different calculation tasks corresponding to multiple types of service application scenarios, for example, a calculation task of real-time navigation in a navigation scenario in automatic driving, a calculation task of offline analysis for driving data, and the like. The automatic driving scenario described herein is merely illustrative of one of the application scenarios and is not intended to limit the present invention.
Different computing tasks are usually assigned different delay tolerance standards. For example, in an autonomous driving scenario, driving control requires ultra-fast data processing and transmission within tens of milliseconds; navigation needs to be updated within a range from a few seconds to a few minutes; while data such as driving history can be analyzed offline without a time limit.
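As an illustration of the request information described above, the following sketch shows how the computing data volume and delay tolerance of such tasks could be represented; the field names and numbers are assumptions added for illustration and are not prescribed by this embodiment:
    from dataclasses import dataclass

    @dataclass
    class ResourceRequest:
        """Resource request information carried by a computing task (field names are illustrative)."""
        app_id: str
        data_volume_mb: float       # computing data volume to be transmitted and processed
        delay_tolerance_ms: float   # maximum acceptable end-to-end delay; float("inf") means delay tolerant

    # Hypothetical requests matching the autonomous-driving examples above.
    requests = [
        ResourceRequest("driving-control", data_volume_mb=0.5, delay_tolerance_ms=20.0),            # tens of milliseconds
        ResourceRequest("navigation-update", data_volume_mb=20.0, delay_tolerance_ms=60_000.0),     # seconds to minutes
        ResourceRequest("history-analysis", data_volume_mb=5_000.0, delay_tolerance_ms=float("inf")),  # offline analysis
    ]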
The computing resource processor 23 is configured to send a computing resource information request to the data center based on the computing data volume and the delay tolerance, and to obtain the computing resource information result responded by the data center. After the computing data volume and the delay tolerance of the current computing task are obtained, computing resources can be allocated according to them. Specifically, the corresponding computing resource request is sent to the corresponding data center, and the data center gives feedback: according to the computing data volume and delay tolerance of the current computing task and the amount of computing resources available inside the data center, the data center feeds back the corresponding computing resource information result to the computing resource processor 23.
The computing resource processor 23 is further configured to obtain a computing resource allocation result based on the computing resource information result and the network resource information. After obtaining the computing resource information result fed back by the data center, the computing resource processor 23 allocates computing resources according to this result and the network resource information, and sends the computing resource allocation result to the application manager or terminal device that initiated the resource request for the computing task. After the application manager or the terminal device obtains the computing resource allocation result, it can perform resource allocation accordingly.
The joint resource scheduler 21 is configured to send the calculation resource allocation result to an application manager corresponding to the calculation task, so that the application manager performs resource allocation based on the calculation resource allocation result.
According to the resource allocation device for the multi-type service application, the available computing resources and network resources are comprehensively considered, the corresponding computing resources are allocated specifically according to the actual computing requirements of the computing tasks, the requirements of the delay tolerant tasks can be met, and the service quality of the delay sensitive tasks can be guaranteed.
Optionally, in some embodiments of the present invention, the data center includes an edge data center and a cloud data center, and the computing resource processor 23 specifically includes:
the time delay calculation module is used for calculating the network delay and the computing processing delay of the computing task in the current networking based on the computing data volume and the delay tolerance. Specifically, a delay estimator may be disposed in the computing resource processor to implement the function of the time delay calculation module: the network delay is mainly estimated as the ratio of the data volume to be transmitted to the available bandwidth, and the computing processing delay is estimated from empirical values; for example, empirical values recorded from previously executed computing tasks can be stored as known information.
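A minimal sketch of such a delay estimator is given below, assuming the network delay is approximated by the data volume divided by the available bandwidth and the processing delay is looked up from stored empirical values; all names and numbers are illustrative:
    def estimate_delays(data_volume_mb, bandwidth_mbps, empirical_proc_ms, task_type):
        """Estimate the network delay and computing processing delay of a computing task (illustrative)."""
        # Network delay: ratio of the transmitted data volume to the available bandwidth.
        network_delay_ms = data_volume_mb * 8.0 / bandwidth_mbps * 1000.0
        # Computing processing delay: empirical value recorded from previously executed tasks of this type.
        processing_delay_ms = empirical_proc_ms.get(task_type, 100.0)  # fallback when no history exists
        return network_delay_ms, processing_delay_ms

    # Example usage with hypothetical numbers.
    history = {"driving-control": 5.0, "navigation-update": 35.0}
    net_ms, proc_ms = estimate_delays(20.0, bandwidth_mbps=100.0,
                                      empirical_proc_ms=history, task_type="navigation-update")
    meets_tolerance = (net_ms + proc_ms) <= 60_000.0  # compare against the task's delay tolerance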
a computing resource information request sending module is configured to send computing resource information requests to the edge data center and the cloud data center, respectively, based on the network delay and the computing processing delay. The calculated network delay and computing processing delay translate into clear requirements on the computing resources; at this point, the specific computing resource requests for the edge DC and the cloud DC can be determined and carried in the computing resource information requests, which are sent to the edge DC and the cloud DC respectively.
and the computing resource information result acquiring module is used for acquiring the computing resource information result responded by the data center. After the edge DC and the cloud DC receive the computing resource information request, they feed back the corresponding computing resource information results according to their own computing resource margins and the network delay and computing processing delay requirements carried in the request, and the computing resource information result acquiring module passes these results to the computing resource processor 23.
Optionally, in some embodiments of the present invention, the computing resource processor 23 further comprises:
the node group determination module is used for selecting nodes meeting the computing requirements of the computing task from the available computing nodes in the current networking to obtain a node group G;
the path generation module is used for sorting the available computing resources by computing node load value, traversing each node of the node group G, performing routing based on each node, and selecting the reachable paths P with the minimum hop count;
the weight value calculating module is configured to calculate a weight value of each reachable path P according to the link load conditions;
and the computing node determination module is used for determining the finally allocated computing node and route according to the ordering obtained from the weight values of the reachable paths P and the load values of the computing nodes.
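One possible reading of how these four modules cooperate is sketched below; the graph representation, load metric, and path weight (taken here as the most loaded link on a path) are assumptions for illustration and are not mandated by this embodiment:
    from collections import deque

    def min_hop_paths(topology, src, dst):
        """Return all reachable paths from src to dst with the minimum hop count (breadth-first search)."""
        shortest, queue = [], deque([[src]])
        while queue:
            path = queue.popleft()
            if shortest and len(path) > len(shortest[0]):
                break  # all remaining queued paths are longer than the shortest found
            node = path[-1]
            if node == dst:
                shortest.append(path)
                continue
            for neighbor in topology.get(node, []):
                if neighbor not in path:
                    queue.append(path + [neighbor])
        return shortest

    def select_node_and_route(nodes, topology, link_load, src, demand):
        """Jointly select a computing node and a route for one computing task (illustrative)."""
        # Node group G: available computing nodes whose free resources satisfy the computing requirement.
        group = [n for n in nodes if n["free_cpu"] >= demand["cpu"]]
        # Sort candidate nodes by computing node load value (least loaded first).
        group.sort(key=lambda n: n["load"])
        candidates = []
        for node in group:
            # Route toward each candidate node and keep only the reachable paths with minimum hop count.
            for path in min_hop_paths(topology, src, node["id"]):
                # Weight each reachable path by its link load; link_load is keyed by directed edge (a, b).
                weight = max((link_load[(a, b)] for a, b in zip(path, path[1:])), default=0.0)
                candidates.append((weight, node["load"], node["id"], path))
        # Final allocation: order by path weight first, then by computing node load value.
        candidates.sort(key=lambda c: (c[0], c[1]))
        return candidates[0] if candidates else None
In this sketch, ties are broken first by path weight and then by computing node load value, mirroring the ordering described above.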
Optionally, in some embodiments of the present invention, the computing resource processor 23 further comprises:
the computing resource determining module is used for allocating computing resources based on the link load conditions, the network delay, the computing processing delay, and the computing data volume to obtain edge computing resources and cloud computing resources respectively. After the computing resource information fed back by the edge DC and the cloud DC is obtained, resources are allocated for the edge DC and the cloud DC respectively; specifically, network resources and computing resources are considered together, and computing resources are allocated according to the link load conditions, the network delay, the computing processing delay, and the computing data volume, so as to obtain the edge computing resources and the cloud computing resources.
and the computing resource allocation module is used for selecting a route and a computing node based on the edge computing resources, the cloud computing resources, and the topology information respectively, and generating an edge computing resource allocation result and a cloud computing resource allocation result. Based on the edge computing resources and cloud computing resources obtained above, resources are allocated according to the topology information, routes, and computing node information in the network resource information, so as to obtain an edge computing resource allocation result for the edge DC and a cloud computing resource allocation result for the cloud DC.
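A simple sketch of this edge/cloud split is given below, under the assumption that a task is placed on the edge DC whenever the edge DC can satisfy both its delay tolerance and its resource demand, and is otherwise scheduled to the cloud DC; the thresholds and field names are illustrative:
    def allocate_edge_or_cloud(task, edge_offer, cloud_offer):
        """Decide whether the computing task is served by the edge DC or the cloud DC (illustrative)."""
        total_edge_ms = task["network_delay_ms"] + task["processing_delay_ms"]
        # Delay sensitive case: use the edge DC if it has enough free resources and meets the tolerance.
        if total_edge_ms <= task["delay_tolerance_ms"] and edge_offer["free_cpu"] >= task["cpu"]:
            return {"target": "edge_dc", "cpu": task["cpu"], "route": edge_offer["route"]}
        # Delay tolerant case (or an overloaded edge DC): schedule the task to the cloud DC.
        if cloud_offer["free_cpu"] >= task["cpu"]:
            return {"target": "cloud_dc", "cpu": task["cpu"], "route": cloud_offer["route"]}
        return None  # no feasible allocation with the currently available resources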
In practical applications, the resource allocation apparatus for multi-type service applications of this embodiment may be implemented by an orchestrator, whose structure is shown in fig. 3. After receiving the computing requirement of an application, the joint resource scheduler JRS finds a solution that can meet the delay requirement of the computing task according to the current network resources and computing resources. The solution includes the location of the target computing node (i.e., the specific DC allocated) and the corresponding network resource allocation such as routing and bandwidth, and the effectiveness of the solution is evaluated according to whether the delay requirement of the application is met. The application request handler ARH receives messages from the application manager, i.e., requests of computing applications, via the Compass protocol, mainly including the size of the data volume and the delay sensitivity (delay tolerance) of the computing task. The IT resource handler IRH implements the request and allocation of IT resources, and can obtain the IT resource information (such as CPU, memory, and disk information) in the current networking through the Sigar protocol. The network state collector NSC obtains the network state information in the current networking, including the current topology and the remaining bandwidth, through the controller's northbound REST interface. The path selection (PP) module selects routes according to the topology information and link load conditions, thereby scheduling the network resources. In the specific scheduling process, corresponding computing resources and network resources can be allocated to the corresponding applications according to characteristics such as the type, computing requirement, and priority of each application, so that reasonable allocation is realized and the performance of multiple parallel applications is improved.
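Putting the above components together, the orchestrator could be composed roughly as follows; the class and method names are invented for illustration, and the Compass, Sigar, and REST interactions are represented only as placeholder calls:
    class Orchestrator:
        """Skeleton of the INRO orchestrator composed of ARH, IRH, NSC and JRS (names are illustrative)."""

        def __init__(self, arh, irh, nsc, jrs):
            self.arh = arh  # application request handler: parses computing requests from the application manager
            self.irh = irh  # IT resource handler: queries IT resources of the edge DC and the cloud DC
            self.nsc = nsc  # network state collector: current topology and remaining bandwidth
            self.jrs = jrs  # joint resource scheduler: joint computing and network decision

        def handle_request(self, raw_request):
            demand = self.arh.parse(raw_request)                 # computing data volume, delay tolerance
            it_info = self.irh.query_data_centers(demand)        # resource offers from the edge DC and cloud DC
            net_info = self.nsc.snapshot()                       # network resource information
            plan = self.jrs.schedule(demand, it_info, net_info)  # target node, route, bandwidth
            return plan  # returned to the application manager and the IT resource manager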
The embodiment of the invention also provides a task processing device for multi-type service application, which comprises: an application manager, and a resource allocation apparatus for a multi-type service application, where the resource allocation apparatus for the multi-type service application is the resource allocation apparatus for the multi-type service application described in any of the above embodiments, and details are not repeated herein.
As shown in fig. 4, the overall flow of joint computing and network resource scheduling implemented by the task processing device for multi-type service applications includes:
First, an application manager sends resource request information of a computing application to the orchestrator INRO, where the resource request information includes the computing data volume, the delay tolerance, and the like; the application request handler ARH processes the application request information and extracts the computing data volume, the delay tolerance, and other information.
Second, the IT resource handler of the orchestrator INRO issues an IT resource information request to the IT resource manager of the edge DC, and also issues an IT resource information request to the IT resource manager of the cloud DC.
Third, the orchestrator INRO parses the received responses and updates the IT resource information in its database for subsequent joint scheduling.
Fourth, the orchestrator INRO internally performs the corresponding resource allocation for the computing application according to the application request information, the network resource state information, and the IT resource information, and returns the allocation result to the application manager that initiated the request and to the IT resource manager that manages the IT computing resources.
Fifth, after receiving the allocated resource information, the application manager prepares to start running the computing application on the designated computing node and route. The allocated computing resources can then be verified with preset training data: the training data is sent to the allocated edge DC or cloud DC, the computing task of the corresponding application is executed, and the resource usage of the edge DC and the cloud DC is monitored.
Sixth, after the training is finished, the computed result is returned.
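The application-manager side of this flow can be sketched as follows, reusing the hypothetical Orchestrator sketch above; the method names on the application manager are likewise invented for illustration:
    def run_computing_application(app_manager, orchestrator, request, training_data):
        """Application-manager side of the joint scheduling flow shown in fig. 4 (illustrative)."""
        plan = orchestrator.handle_request(request)          # steps 1-4: request and joint resource allocation
        app_manager.deploy(plan["target"], plan["route"])    # step 5: start the application on the assigned node and route
        result = app_manager.execute(training_data)          # verify the allocation with preset training data
        app_manager.monitor_resource_usage(plan["target"])   # observe the resource usage of the edge/cloud DC
        return result                                        # step 6: return the computed result after training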
An embodiment of the present invention further provides a resource allocation method for multi-type service applications, as shown in fig. 5, the resource allocation method mainly includes:
step S501: acquiring resource request information of a computing task and network resource information in current networking; for details, reference may be made to the functional description implemented by the network status collector and the application request handler in the foregoing embodiments, and details are not described herein again.
Step S502: extracting the calculation data volume and the delay tolerance of the calculation application from the resource request information; for details, reference may be made to the functional description implemented by the application request processor in the foregoing embodiment, and details are not described herein again.
Step S503: sending a computing resource information request to a data center based on the computing data volume and the delay tolerance; for details, reference may be made to the functional description implemented by the computing resource processor in the above embodiments, and details are not repeated herein.
Step S504: acquiring a computing resource information result responded by the data center; for details, reference may be made to the functional description implemented by the computational resource processor in the foregoing embodiments, and details are not repeated herein.
Step S505: obtaining a calculation resource allocation result based on the calculation resource information result and the network resource information; for details, reference may be made to the functional description implemented by the computing resource processor in the above embodiments, and details are not repeated herein.
Step S506: sending the computing resource allocation result to the application manager corresponding to the computing task, so that the application manager performs resource allocation based on the computing resource allocation result. For details, reference may be made to the functional description implemented by the joint resource scheduler in the above embodiments, and details are not described herein again.
According to the resource allocation method for the multi-type service application, the available computing resources and network resources are comprehensively considered, the corresponding computing resources are allocated specifically according to the actual computing requirements of the computing tasks, the requirements of the delay tolerant tasks can be met, and the service quality of the delay sensitive tasks can be guaranteed.
Optionally, in some embodiments of the invention, the data center comprises an edge data center and a cloud data center, and the process of step S503, sending a computing resource information request to a data center based on the computing data volume and the delay tolerance, mainly includes: calculating the network delay and the computing processing delay of the computing task in the current networking based on the computing data volume and the delay tolerance; and respectively sending computing resource information requests to the edge data center and the cloud data center based on the network delay and the computing processing delay. For details, reference may be made to the detailed description of the computing resource processor in the above embodiments, and details are not repeated herein.
Optionally, in some embodiments of the present invention, the network resource information includes the topology information and link load conditions of the current networking, and the process of step S505, obtaining a computing resource allocation result based on the computing resource information result and the network resource information, mainly includes: selecting nodes meeting the computing requirements of the computing task from the available computing nodes in the current networking to obtain a node group; sorting the available computing resources by computing node load value, traversing each node of the node group, performing routing based on each node, and selecting the reachable paths with the minimum hop count; calculating a weight value of each reachable path according to the link load conditions; and determining the finally allocated computing node and route according to the ordering obtained from the weight values of the reachable paths and the load values of the computing nodes. For details, reference may be made to the detailed description of the computing resource processor in the above embodiments, and details are not repeated herein.
Optionally, in some embodiments of the present invention, the network resource information includes topology information of the current networking and a link load condition, and the step S505 of obtaining a calculation resource allocation result based on the calculation resource information result and the network resource information mainly includes: performing computing resource allocation based on the link load condition, the network delay, the computing processing delay and the computing data volume to respectively obtain edge computing resources and cloud computing resources; and selecting a route and a computing node based on the edge computing resource, the cloud computing resource and the topology information respectively, and generating an edge computing resource allocation result and a cloud computing resource allocation result. For details, reference may be made to the detailed description of the computational resource processor in the above embodiments, and details are not repeated herein.
An embodiment of the present invention further provides a task processing method for multi-type service applications, as shown in fig. 6, the task processing method includes:
step S601: sending resource request information of the computing task to a resource allocation processor;
step S602: acquiring a calculation resource allocation result fed back by the resource allocation processor in response to the resource request information; the calculation resource allocation result is obtained by the resource allocation processor executing the resource allocation method for the multi-type service application described in any of the above embodiments;
step S603: and distributing the computing tasks to corresponding routes and computing nodes for computing based on the computing resource distribution result. For details, reference may be made to the implementation process of the task processing device for multi-type service applications in the foregoing embodiments, and details are not described herein again.
An embodiment of the present invention further provides a computer device, as shown in fig. 7, the computer device may include a processor 71 and a memory 72, where the processor 71 and the memory 72 may be connected by a bus or in another manner, and fig. 7 illustrates an example of a connection by a bus.
The processor 71 may be a Central Processing Unit (CPU). The processor 71 may also be another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory 72, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as the program instructions/modules corresponding to the resource allocation method or the task processing method for multi-type service applications in the embodiments of the present invention. The processor 71 executes the various functional applications and data processing of the processor by running the non-transitory software programs, instructions, and modules stored in the memory 72, that is, implements the resource allocation method or the task processing method for multi-type service applications in the above-described method embodiments.
The memory 72 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor 71, and the like. Further, the memory 72 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 72 may optionally include memory located remotely from the processor 71, and such remote memory may be connected to the processor 71 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 72 and, when executed by the processor 71, perform a resource allocation method of a multi-type service application or a task processing method of a multi-type service application as in the embodiments shown in fig. 5-6.
The details of the computer device can be understood by referring to the corresponding descriptions and effects in the embodiments shown in fig. 5 to 6, which are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, abbreviated as HDD), a Solid State Drive (SSD), or the like; the storage medium may also comprise a combination of memories of the kind described above.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. A resource allocation method for multi-type service application is characterized in that the resource allocation method comprises the following steps:
acquiring resource request information of a computing task and network resource information in current networking;
extracting the calculation data volume and the delay tolerance of the calculation application from the resource request information;
sending a computing resource information request to a data center based on the computing data volume and the delay tolerance;
acquiring a calculation resource information result responded by the data center;
obtaining a calculation resource allocation result based on the calculation resource information result and the network resource information;
and sending the calculation resource allocation result to an application manager corresponding to the calculation task, so that the application manager performs resource allocation based on the calculation resource allocation result.
2. The method of claim 1, wherein the data center comprises an edge data center and a cloud data center, and the sending a computing resource information request to a data center based on the computing data volume and the delay tolerance comprises the following steps:
calculating the network delay and the calculation processing delay of the calculation task in the current networking based on the calculation data volume and the delay tolerance;
and respectively sending a computing resource information request to the edge data center and the cloud data center based on the network delay and the computing processing delay.
3. The method of claim 2, wherein the network resource information includes topology information and a link load condition of the current networking, and wherein
obtaining a computing resource allocation result based on the computing resource information result and the network resource information, including:
selecting nodes meeting the computing requirements of the computing tasks from the available computing nodes in the current networking to obtain node groups;
sorting the available computing resources by computing node load value, traversing each node of the node group, performing routing based on each node, and selecting the reachable paths with the minimum hop count;
calculating a weight value of each reachable path according to the link load condition;
and determining the finally distributed computing nodes and routes according to the sequence obtained by the weight values of the reachable paths and the load values of the computing nodes.
4. The method of claim 2, wherein the network resource information includes topology information and a link load condition of the current networking, and wherein
the obtaining a computing resource allocation result based on the computing resource information result and the network resource information includes:
performing computing resource allocation based on the link load condition, the network delay, the computing processing delay and the computing data volume to respectively obtain edge computing resources and cloud computing resources;
and selecting a route and a computing node based on the edge computing resource, the cloud computing resource and the topology information respectively, and generating an edge computing resource allocation result and a cloud computing resource allocation result.
5. A task processing method for multi-type service application is characterized in that the task processing method comprises the following steps:
sending resource request information of the computing task to a resource allocation processor;
acquiring a calculation resource allocation result fed back by the resource allocation processor in response to the resource request information; the result of computing resource allocation is obtained by the resource allocation processor executing the resource allocation method of multi-type service application according to any one of claims 1-4;
and distributing the computing tasks to corresponding routes and computing nodes for computing based on the computing resource distribution result.
6. A resource allocation apparatus for multi-type service applications, the resource allocation apparatus comprising: a joint resource scheduler, an application request handler, a compute resource handler, a network state collector, wherein,
the network state collector is used for acquiring network resource information in the current networking;
the application request processor is used for acquiring resource request information of a computing task and extracting the computing data volume and the delay tolerance of the computing application from the resource request information;
the computing resource processor is used for sending a computing resource information request to a data center based on the computing data volume and the delay tolerance and acquiring a computing resource information result responded by the data center;
the computing resource processor is also used for obtaining a computing resource distribution result based on the computing resource information result and the network resource information;
and the joint resource scheduler is used for sending the calculation resource allocation result to an application manager corresponding to the calculation task so as to enable the application manager to allocate resources based on the calculation resource allocation result.
7. The apparatus for allocating resources of a multi-type service application according to claim 6, wherein the data center comprises an edge data center and a cloud data center, and the computing resource processor comprises:
the time delay calculation module is used for calculating the network time delay and the calculation processing time delay of the calculation task in the current networking based on the calculation data volume and the time delay tolerance;
a computing resource information request sending module, configured to send computing resource information requests to the edge data center and the cloud data center, respectively, based on the network delay and the computing processing delay;
and the computing resource information result acquisition module is used for acquiring the computing resource information result responded by the data center.
8. The apparatus for resource allocation of multi-type service application according to claim 7, wherein the network resource information includes topology information and a link load condition of the current networking, and wherein
the computing resource processor further comprises:
the node group determination module is used for selecting nodes meeting the computing requirements of the computing tasks from the available computing nodes in the current networking to obtain node groups;
the path generation module is used for sorting the available computing resources by computing node load value, traversing each node of the node group, performing routing based on each node, and selecting the reachable paths with the minimum hop count;
a weighted value calculating module, configured to calculate a weighted value of each reachable path according to the link load condition;
and the calculation node determination module is used for determining the finally distributed calculation nodes and routes according to the weighted values of the reachable paths and the sequence obtained by the load values of the calculation nodes.
9. The apparatus for resource allocation of multi-type service application according to claim 7, wherein the network resource information includes topology information and a link load condition of the current networking, and wherein
the compute resource processor includes:
the computing resource determining module is used for allocating computing resources based on the link load condition, the network delay, the computing processing delay and the computing data amount to respectively obtain edge computing resources and cloud computing resources;
and the computing resource allocation module is used for selecting a route and a computing node respectively based on the edge computing resource, the cloud computing resource and the topology information, and generating an edge computing resource allocation result and a cloud computing resource allocation result.
10. A task processing device for multi-type business applications, the task processing device comprising: an application manager, and the resource allocation apparatus of the multi-type service application according to any one of claims 6 to 9,
the application manager is configured to perform the steps of:
sending resource request information of a computing task to the resource allocation device;
acquiring a calculation resource allocation result fed back by the resource allocation device in response to the resource request information;
and distributing the computing tasks to corresponding routes and computing nodes for computing based on the computing resource distribution result.
CN202111613817.2A 2021-12-27 2021-12-27 Resource allocation and acquisition method and device for multi-type service application Pending CN114500405A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111613817.2A CN114500405A (en) 2021-12-27 2021-12-27 Resource allocation and acquisition method and device for multi-type service application

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111613817.2A CN114500405A (en) 2021-12-27 2021-12-27 Resource allocation and acquisition method and device for multi-type service application

Publications (1)

Publication Number Publication Date
CN114500405A true CN114500405A (en) 2022-05-13

Family

ID=81496243

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111613817.2A Pending CN114500405A (en) 2021-12-27 2021-12-27 Resource allocation and acquisition method and device for multi-type service application

Country Status (1)

Country Link
CN (1) CN114500405A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102426542A (en) * 2011-10-28 2012-04-25 中国科学院计算技术研究所 Resource management system for data center and operation calling method
US20200236038A1 (en) * 2019-01-18 2020-07-23 Rise Research Institutes of Sweden AB Dynamic Deployment of Network Applications Having Performance and Reliability Guarantees in Large Computing Networks
CN110149646A (en) * 2019-04-10 2019-08-20 中国电力科学研究院有限公司 A kind of smart grid method for managing resource and system based on time delay and handling capacity
CN112231085A (en) * 2020-10-21 2021-01-15 中国电子科技集团公司第二十八研究所 Mobile terminal task migration method based on time perception in collaborative environment
CN113535390A (en) * 2021-06-28 2021-10-22 山东师范大学 Method, system, device and medium for distributing multi-access edge computing node resources

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LANJING CHEN, JINTAO ZHOU, ZHIYONG CHEN, NING LIU, MEIXIA TAO: "Resource Allocation for Deterministic Delay in Wireless Control Networks", IEEE WIRELESS COMMUNICATIONS LETTERS, 28 June 2021 (2021-06-28) *
姜栋瀚; 林海涛: "A Survey of Key Technologies for Resource Allocation in Cloud Computing Environments" (in Chinese), Journal of China Academy of Electronics and Information Technology, no. 03, 20 June 2018 (2018-06-20) *
黄晓舸, 崔艺凡, 张东宇, 陈前斌: "Joint Optimization Scheme for Task Offloading and Resource Allocation Based on MEC" (in Chinese), ***工程与电子技术, 4 March 2020 (2020-03-04) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115174681A (en) * 2022-06-14 2022-10-11 武汉大学 Method, equipment and storage medium for scheduling edge computing service request
CN115174681B (en) * 2022-06-14 2023-12-15 武汉大学 Method, equipment and storage medium for scheduling edge computing service request
CN115002810A (en) * 2022-08-01 2022-09-02 阿里巴巴达摩院(杭州)科技有限公司 Resource configuration method, private network control method, edge cloud server and equipment
CN115002810B (en) * 2022-08-01 2023-01-13 阿里巴巴达摩院(杭州)科技有限公司 Resource configuration method, private network control method, edge cloud server and equipment
CN115396358A (en) * 2022-08-23 2022-11-25 中国联合网络通信集团有限公司 Route setting method, device and storage medium for computing power perception network
CN115396358B (en) * 2022-08-23 2023-06-06 中国联合网络通信集团有限公司 Route setting method, device and storage medium of computing power perception network
CN115134368A (en) * 2022-08-31 2022-09-30 中信建投证券股份有限公司 Load balancing method, device, equipment and storage medium
CN115134368B (en) * 2022-08-31 2022-11-25 中信建投证券股份有限公司 Load balancing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN114500405A (en) Resource allocation and acquisition method and device for multi-type service application
CN112751826B (en) Method and device for forwarding flow of computing force application
CN109669768B (en) Resource allocation and task scheduling method for edge cloud combined architecture
CN113448721A (en) Network system for computing power processing and computing power processing method
CN114095577A (en) Resource request method and device, calculation network element node and calculation application equipment
CN113709048A (en) Routing information sending and receiving method, network element and node equipment
US11196667B2 (en) Path computation method, message responding method, and related device
CN107231662A (en) The method and apparatus of multiple stream transmission in a kind of SDN
CN107317707B (en) SDN network topology management method based on point coverage set
Gong et al. A fuzzy delay-bandwidth guaranteed routing algorithm for video conferencing services over SDN networks
Dong et al. Distributed mechanism for computation offloading task routing in mobile edge cloud network
CN109922161B (en) Content distribution method, system, device and medium for dynamic cloud content distribution network
CN114500354A (en) Switch control method, device, control equipment and storage medium
CN112714146B (en) Resource scheduling method, device, equipment and computer readable storage medium
US20150372895A1 (en) Proactive Change of Communication Models
CN113810442A (en) Resource reservation method, device, terminal and node equipment
CN109474523B (en) Networking method and system based on SDN
US20230275807A1 (en) Data processing method and device
CN110955504A (en) Method, server, system and storage medium for intelligently distributing rendering tasks
CN113852554B (en) Data transmission method, device and equipment
CN115665262A (en) Request processing method and device, electronic equipment and storage medium
Nasim et al. Mobile publish/subscribe system for intelligent transport systems over a cloud environment
CN114501374A (en) Dynamic service deployment method, system, device and storage medium for Internet of vehicles
CN109450809B (en) Data center scheduling system and method
Wu et al. Multi-Objective Provisioning of Network Slices using Deep Reinforcement Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination