CN116302469A - Task processing method and device - Google Patents


Info

Publication number
CN116302469A
CN116302469A (application CN202211632581.1A)
Authority
CN
China
Prior art keywords
computing
task
node
distributed computing
distributed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211632581.1A
Other languages
Chinese (zh)
Inventor
杨劲武
阳熙
卿语
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd
Priority to CN202211632581.1A
Publication of CN116302469A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083 Techniques for rebalancing the load in a distributed system
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiment of the invention provides a task processing method and device, wherein the method comprises the following steps: constructing a computing power network between a central computing power pool and a plurality of distributed computing power pools, the central computing power pool being deployed with a central computing node and each distributed computing power pool being deployed with a distributed computing node; determining task demand information for a computing task initiated by a user; and determining a target computing node from the central computing node and the plurality of distributed computing nodes according to the task demand information, so as to call the target computing node through the computing power network to process the computing task, the target computing node being used for calling the computing power of its computing power pool to perform the computation. Through the embodiment of the invention, distributed computing power resources are applied by constructing a computing power network, improving the flexibility of application and the utilization rate of computing power resources; computing power scheduling is performed according to actual task demands, improving the efficiency with which computing power resources are applied.

Description

Task processing method and device
Technical Field
The present invention relates to the field of cloud computing, and in particular, to a method and apparatus for task processing.
Background
In the cloud computing era, users tend to solve problems by building centralized super computing pools. From the perspective of resource construction and ownership, most enterprise users are not large-scale internet enterprises with deep financial resources and obviously lack sufficient resource reserves; from the perspective of resource supply, the owners of information infrastructure resources are scattered, making it difficult for any single party to hold all the resources.
As business scenarios keep changing, users in various industries need to utilize resources comprehensively in combination with their changing demands, and in many new business scenarios the existing centralized computing power resources are no longer suitable.
Disclosure of Invention
In view of the above, a task processing method and apparatus are presented that overcome or at least partially solve the above-mentioned problems, including:
a method of task processing, the method comprising:
constructing a computing power network among a central computing power pool and a plurality of distributed computing power pools; the central computing power pool is provided with a central computing node, and each distributed computing power pool is provided with a distributed computing node;
determining task demand information aiming at a computing task initiated by a user;
determining a target computing node from the central computing node and the plurality of distributed computing nodes according to the task demand information, so as to call the target computing node through the computing power network to process the computing task; the target computing node is used for calling the computing power of its computing power pool to perform computing.
Optionally, the task demand information includes a demand confidence, and determining, according to the task demand information, a target computing node from the central computing node and a plurality of distributed computing nodes includes:
determining the target computing node from the plurality of distributed computing nodes when the demand confidence is greater than a preset confidence;
and determining the central computing node as the target computing node when the demand confidence is less than or equal to the preset confidence.
Optionally, the task demand information includes a demand processing time, and determining, according to the task demand information, a target computing node from the central computing node and a plurality of distributed computing nodes includes:
determining the target computing node from the plurality of distributed computing nodes when the demand processing time is less than a preset time;
and determining the central computing node as the target computing node when the demand processing time is greater than or equal to the preset time.
Optionally, the determining a target computing node from the plurality of distributed computing nodes includes:
predicting the time consumed by the plurality of distributed computing nodes to process the computing task;
determining, from the plurality of distributed computing nodes, a target computing node whose consumed time is less than the demand processing time.
Optionally, the task demand information includes a demand computing power, and the determining task demand information for the computing task initiated by the user includes:
performing computing power quantification on the computing task initiated by the user to obtain the demand computing power;
the determining, from the plurality of distributed computing nodes, the target computing node whose consumed time is less than the demand processing time includes:
determining the remaining computing power of the plurality of distributed computing nodes;
determining, from the plurality of distributed computing nodes, a target computing node whose consumed time is less than the demand processing time and whose remaining computing power is greater than the demand computing power.
Optionally, the method further comprises:
performing anomaly early warning on each distributed computing node and the distributed computing power pool where each distributed computing node is located.
Optionally, the performing anomaly early warning on each distributed computing node and the distributed computing power pool includes:
for each target distributed computing node and the distributed computing power pool where it is located, obtaining the probability of an abnormal state occurring and the probability of a normal state occurring in a sample set;
and performing anomaly early warning on the distributed computing node and the distributed computing power pool where it is located when the probability of the abnormal state is greater than the probability of the normal state.
An apparatus for task processing, the apparatus comprising:
a computing power network construction module, used for constructing a computing power network between the central computing power pool and a plurality of distributed computing power pools; the central computing power pool is provided with a central computing node, and each distributed computing power pool is provided with a distributed computing node;
a task demand information determining module, used for determining task demand information for a computing task initiated by a user;
a target computing node determining module, used for determining a target computing node from the central computing node and the plurality of distributed computing nodes according to the task demand information, so as to call the target computing node through the computing power network to process the computing task; the target computing node is used for calling the computing power of its computing power pool to perform computing.
An electronic device comprising a processor, a memory and a computer program stored on the memory and capable of running on the processor, which computer program, when executed by the processor, implements a method of task processing as described above.
A computer readable storage medium having stored thereon a computer program which when executed by a processor implements a method of task processing as described above.
The embodiment of the invention has the following advantages:
In the embodiment of the invention, a computing power network is constructed between a central computing power pool and a plurality of distributed computing power pools, with a central computing node deployed in the central computing power pool and a distributed computing node deployed in each distributed computing power pool; task demand information is determined for a computing task initiated by a user, and a target computing node is determined from the central computing node and the plurality of distributed computing nodes according to the task demand information, so that the target computing node is called through the computing power network to process the computing task; the target computing node calls the computing power of its computing power pool to perform the computation. In this way, distributed computing power resources are applied by constructing a computing power network, improving the flexibility of application and the utilization rate of computing power resources, and computing power scheduling is performed according to actual task demands, improving the efficiency with which computing power resources are applied.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings that are needed in the description of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort to a person skilled in the art.
FIG. 1 is a flow chart of steps of a method for task processing according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a system architecture according to an embodiment of the present invention;
FIG. 3 is a flow chart of steps of another method for task processing according to an embodiment of the present invention;
FIG. 4 is a flow chart of steps of another method for task processing according to an embodiment of the present invention;
fig. 5 is a block diagram of an apparatus for task processing according to an embodiment of the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. It will be apparent that the described embodiments are some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Computing power, that is, computing resources, has become an important fundamental resource of the digital age. Currently, various computation-intensive services, such as the AI services widely used across industries, require large amounts of computing power resources.
When centralized computing power resources are difficult to apply, resource integration and utilization techniques provide a flexible and feasible overall solution, allowing a user to select the most suitable computing node according to factors such as service characteristics, price and network conditions.
In the embodiment of the invention, the computing power pools send their free computing power information through the computing power network to the network control plane, which then distributes the computing power information. After receiving a user's business requirement, the most suitable computing power pool and network path can be selected by analyzing the network information and computing power information recorded in the routing table. The computing power network thus selects the network first and then the computing power pool (cloud data center computing node or edge computing node), realizing "one network, multiple clouds".
On the one hand, by creating a distributed computing power network, distributed computing power pools and distributed computing nodes, the total delay of a computing task (namely, the time required to process the computing task) is calculated through a model algorithm. The total delay includes the transmission delay from the user to the edge computing node (i.e., the distributed computing node), the processing delay of the edge node, the transmission delay between the edge node and the cloud data center computing node (i.e., the central computing node), and the computing delay of the cloud computing node. The total delay of the computing task in the system and the optimized average delay are used for the user to select a network computing node to complete the computing task.
First, network computing nodes (edge computing nodes) and computing power pools are built at the network edge in each place, close to users; at the same time, distributed deployment can greatly relieve the pressure on the central computing node and its computing power pool. Second, while meeting strict delay requirements, multiple types of resources across the whole network, such as computing resources, storage resources and transmission link communication resources, are scheduled collaboratively, providing strict quality-of-service guarantees for all users and achieving optimal network resource allocation.
Delay is the most important index in a communication system. Through reasonable allocation of network resources and computing resources, the delay performance index of a computing task can be guaranteed and optimized under constraints such as limited resources and task priority. The total delay of a computing task in the system is obtained by jointly considering the transmission delay to the edge computing node, the processing delay of the edge node, the transmission delay between the edge node and the cloud data center computing node, and the computing delay of the cloud computing node; averaging then yields the average delay for a computing task to reach the user-selected computing node or the default edge computing node.
On the other hand, a computing power quantification model is created to estimate how much computing power is needed by the computing tasks sent by the distributed computing task nodes in various places; a computing power pool of a network computing node with abundant free computing power is then selected for the computation, and nodes with abundant free computing power are selected to transmit the computing tasks to the designated computing node or edge computing node.
In still another aspect, early warning of network quality and equipment faults is performed on the edge network computing nodes and computing power pools through an early warning model, guaranteeing the healthy operation of the distributed computing power network.
In the embodiment of the invention, considering delay makes it possible to reduce the delay of transmission through the nodes and to select a better line, and quantifying computing power makes it possible to keep computing-power-strained nodes from bearing the transmission of computing tasks.
The following is further described with reference to the accompanying drawings:
referring to fig. 1, a flowchart illustrating steps of a method for task processing according to an embodiment of the present invention may specifically include the following steps:
step 101, constructing a computing force network among a central computing force pool and a plurality of distributed computing force pools; the central computing force pools are deployed with central computing nodes, and each distributed computing force pool is deployed with distributed computing nodes.
Devices with idle computing power resources can share those idle resources, which can then be pooled into distributed computing power pools. As shown in fig. 2, computing power resources can be shared through a multi-cloud internet by enterprise DC1 and enterprise DC2 through enterprise DCN, a disaster recovery center, public cloud A, public cloud B, a private cloud, and so on. The shared computing power resources can be divided into a plurality of distributed computing power pools according to their locations. The distributed computing power pools can be located at the edge of the network for edge computing, and each can be deployed with a distributed computing node, i.e., an edge computing node, which can be used to call the computing power resources in its distributed computing power pool to perform the relevant computation.
In addition to the distributed computing power pools, a central computing power pool may be set up by deploying one's own equipment, and may be provided with a central computing node, i.e., a cloud data center computing node, which may be used to call the computing power resources of the central computing power pool for the relevant computation.
To facilitate the utilization of computing power resources, a computing power network can be constructed over the central computing power pool and the plurality of distributed computing power pools. The computing power network can record, through a routing table, the computing power information of each computing power pool, such as available computing power size and computing power type, and can record the network information between the computing power pools.
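As an illustrative sketch of such a routing table (the field names and sample values are hypothetical, not taken from the patent), each entry might record both the computing power information and the network information of a pool:

```python
from dataclasses import dataclass

@dataclass
class PoolRoute:
    """One routing-table entry of the computing power network (illustrative)."""
    pool_id: str           # identifier of the computing power pool
    node_kind: str         # "central" or "distributed" (edge)
    free_capacity: float   # available computing power, in unified units
    power_types: tuple     # e.g. ("logic", "parallel", "nn-accel")
    link_bandwidth: float  # network bandwidth to the pool, bit/s
    link_latency: float    # network latency to the pool, seconds

# A toy routing table with one central pool and two edge pools
routing_table = [
    PoolRoute("central-dc", "central", 500.0,
              ("logic", "parallel", "nn-accel"), 1e9, 0.030),
    PoolRoute("edge-a", "distributed", 80.0, ("logic", "parallel"), 1e8, 0.005),
    PoolRoute("edge-b", "distributed", 120.0, ("logic", "nn-accel"), 1e8, 0.008),
]
```

A scheduler could scan such entries to match a task's demand against both computing power and network conditions.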
Step 102, determining task demand information for a computing task initiated by a user.
When a user has a computing demand, the user can initiate a computing task. As shown in fig. 2, enterprise DC1 and enterprise DC2 through enterprise DCN initiate computing tasks to the network control plane, and the task demand information of the user, that is, the computing power resource requirement for processing the computing task, can then be determined according to the information carried by the computing task.
As an example, the task demand information may include demand confidence, demand processing time.
The demand confidence is the degree of trust required of the distributed computing node and its computing power pool that process the computing task. It is determined according to the importance level of the computing task: the higher the importance level, the higher the demand confidence; the lower the importance level, the lower the demand confidence.
The demand processing time is the time required for processing the computing task, namely the delay described above; the delay required differs between computing tasks and can be carried in the computing task.
In an embodiment of the present invention, the task demand information may further include demand calculation force, and the determining task demand information for the computing task initiated by the user includes:
performing computing power quantification on the computing task initiated by the user to obtain the demand computing power.
In a specific implementation, a computing power quantification model is created, and the amount of computing power required by the computing tasks sent by the distributed computing task nodes in each place is dynamically computed by the model.
Specifically, different computing power requirements can be matched to different business scenarios, and the computing power is then quantified uniformly; unified quantification of computing power is the basis of computing power scheduling and use.
As an example, the demand for computing force can be divided into 3 categories: logic computing capability, parallel computing capability, and neural network acceleration capability.
Depending on the algorithm run and the type of data computation involved, the computing power can be divided into logic computing power, parallel computing power and neural network computing power.
Chips from different manufacturers have different designs for different computing types, which requires a unified measurement of heterogeneous computing power; the computing power provided by different chips can be mapped to a unified dimension through measurement functions. For heterogeneous computing power devices and platforms, assuming there are n logic operation chips, m parallel computing chips and p neural network acceleration chips, the computing power demand can be described uniformly by the following computing power scheduling model:
$$C_{br} = a\sum_{i=1}^{n} f(a_i) + b\sum_{j=1}^{m} f(b_j) + y\sum_{k=1}^{p} f(c_k) + q$$
where $C_{br}$ is the total computing power demand; $f(\cdot)$ is a mapping function; $a$, $b$ and $y$ are mapping proportionality coefficients; $q$ is the redundant computing power; $a_i$, $b_j$ and $c_k$ denote the $i$th logic operation chip, the $j$th parallel computing chip and the $k$th neural network acceleration chip, respectively. Taking parallel computing capability as an example, assuming there are three different types of parallel computing chip resources $b_1$, $b_2$, $b_3$, $f(b_j)$ denotes the mapping function of the parallel computing power available from the $j$th parallel computing chip, and $q_2$ denotes the redundant computing power of parallel computing.
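A minimal sketch of this unified quantification, assuming an identity mapping function by default (the function name, default arguments and sample coefficients are illustrative, not from the patent):

```python
def unified_demand(logic, parallel, nn, a=1.0, b=1.0, y=1.0, q=0.0,
                   f=lambda x: x):
    """Map heterogeneous chip demands onto one unified measure C_br.

    `logic`, `parallel` and `nn` are per-chip computing power demands for
    the n logic, m parallel and p neural-network acceleration chips;
    a, b, y are the mapping proportionality coefficients and q is the
    redundant computing power, as in the model above.
    """
    return (a * sum(f(x) for x in logic)
            + b * sum(f(x) for x in parallel)
            + y * sum(f(x) for x in nn)
            + q)
```

With coefficients a=1, b=2, y=3 and q=0.5, demands [1, 2], [3], [4] combine to 3 + 6 + 12 + 0.5 = 21.5 unified units.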
Step 103, determining a target computing node from the central computing node and a plurality of distributed computing nodes according to the task demand information, so as to call the target computing node through the computing power network to process the computing task; the target computing node is used for calling the computing power of its computing power pool to perform computing.
After the task demand information of the computing task is obtained, a target computing node can be determined from the central computing node and the plurality of distributed computing nodes according to the task demand information, combined with the network information and computing power information recorded in the routing table of the computing power network; the target computing node is then called through the computing power network to process the computing task, and the target computing node can call the computing power resources in its computing power pool to perform the computation.
In an embodiment of the present invention, the task demand information includes a demand confidence, which is the degree of trust required of the distributed computing node and its computing power pool that process the computing task; it is determined according to the importance level of the computing task: the higher the importance level, the higher the demand confidence; the lower the importance level, the lower the demand confidence.
Accordingly, the determining, according to the task demand information, a target computing node from the central computing node and a plurality of distributed computing nodes includes:
determining the target computing node from the plurality of distributed computing nodes when the demand confidence is greater than a preset confidence; and determining the central computing node as the target computing node when the demand confidence is less than or equal to the preset confidence.
In particular implementations, many enterprises, out of concern that their core secrets could be obtained by third parties, prefer to place core data in areas they trust and control. Enterprises therefore favor schemes that balance computing power among security, reliability, cost and flexibility.
On this basis, computing power resources are selected and connected to the network on demand: important tasks with high timeliness requirements are placed in the computing power pools of edge network computing nodes in trusted places, while tasks with low timeliness requirements are placed in the pool of the centralized, low-cost central computing node.
It can be seen that, due to requirements on cost, delay and the like, a completely centralized super computing pool cannot meet all demands, whereas distributed computing power pools, such as edge computing nodes, have obvious advantages in these indexes, are highly adaptable, and are in line with future business trends.
In an embodiment of the present invention, the task demand information includes a demand processing time, which is the time required for processing the computing task, namely the delay described above; the delay required differs between computing tasks and can be carried in the computing task.
Accordingly, the determining, according to the task demand information, a target computing node from the central computing node and a plurality of distributed computing nodes includes:
determining the target computing node from the plurality of distributed computing nodes when the demand processing time is less than a preset time; and determining the central computing node as the target computing node when the demand processing time is greater than or equal to the preset time.
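The two alternative selection rules described above, one by demand confidence and one by demand processing time, can be sketched as follows (function names and thresholds are illustrative assumptions):

```python
def select_by_confidence(demand_conf: float, preset_conf: float) -> str:
    """Rule 1: a high demand confidence routes the task to a distributed
    (edge) node; otherwise the central node handles it."""
    return "distributed" if demand_conf > preset_conf else "central"

def select_by_processing_time(demand_time: float, preset_time: float) -> str:
    """Rule 2: a tight demand processing time routes the task to a
    distributed (edge) node; otherwise the central node handles it."""
    return "distributed" if demand_time < preset_time else "central"
```

For example, a task with demand confidence 0.9 against a preset confidence of 0.5 goes to a distributed node, while one needing 10 seconds against a 5-second preset goes to the central node.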
In an embodiment of the present invention, the determining a target computing node from the plurality of distributed computing nodes includes:
predicting the time consumed by the plurality of distributed computing nodes to process the computing task; and determining, from the plurality of distributed computing nodes, a target computing node whose consumed time is less than the demand processing time.
In a specific implementation, the time consumed by each of the plurality of distributed computing nodes in the computing power network to process the computing task, namely the delay described above, can be predicted; a target computing node whose consumed time is less than the demand processing time, that is, a node meeting the delay requirement, can then be selected from the plurality of distributed computing nodes.
Specifically, the total delay (i.e., the consumed time) of one computing task may include the transmission delay from the user to the edge computing node (i.e., the distributed computing node), the processing delay of the edge computing node, the transmission delay between the edge node and the cloud data center computing node (i.e., the central computing node), and the computing delay of the cloud computing node. That is, the total delay of the computing task in the system is:
$$T_i = T_{k,i}^{tr} + T_{k,i}^{edge} + T_{k,i}^{wan} + T_{i}^{cloud}$$
the specific formulas of each link are as follows:
1. Transmission delay for the user to access the edge computing node, $T_{k,i}^{tr}$
Assume the user accesses the edge computing node through communication transmission over a wireless channel, and let the data transmission bandwidth of the wireless communication link be $B$. By Shannon's theorem, in a channel environment with limited bandwidth and noise interference, the transmission delay from user $i$ to the $k$th edge computing node is:
$$T_{k,i}^{tr} = \frac{D_i}{B \log_2\left(1 + \frac{p_i h_{k,i}}{\sigma^2}\right)}$$
where $D_i$ is the data size of user $i$'s task; $p_i$ is the transmission power of the $i$th device; $h_{k,i}$ is the channel gain from the $i$th user terminal to the $k$th edge node, a random independent identically distributed variable; and $\sigma^2$ is the additive white Gaussian noise power.
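The Shannon-based access delay can be sketched directly (parameter names are illustrative; data in bits, bandwidth in Hz, powers in consistent linear units):

```python
import math

def access_delay(data_bits, bandwidth_hz, tx_power, channel_gain, noise_power):
    """Transmission delay from user i to edge node k over a noisy wireless
    channel, using the Shannon capacity B * log2(1 + p * h / sigma^2)."""
    rate = bandwidth_hz * math.log2(1.0 + tx_power * channel_gain / noise_power)
    return data_bits / rate
```

With a 1 MHz link, transmit power 3, unit gain and unit noise, the rate is 1e6 * log2(4) = 2 Mbit/s, so a 1 Mbit task takes 0.5 s to transmit.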
2. Edge node processing delay, $T_{k,i}^{edge}$
The user task can be scheduled according to its different demands on computing and network resources: part of the computing task is processed at the edge computing node, and the other part is offloaded to the cloud data center. $\lambda_i$ denotes the proportion of the $i$th user's computing task placed on its corresponding edge computing node, with $\lambda_i \in [0,1]$, and $1-\lambda_i$ denotes the proportion of the computing task offloaded to the cloud data center.
Let $J_{k,i}$ denote the computing power resources allocated to user $i$ by the $k$th edge computing node; the edge computing delay of task $i$ is therefore:
$$T_{k,i}^{edge} = \frac{\lambda_i C_i}{J_{k,i}}$$
where $C_i$ is the computation amount (e.g., required CPU cycles) of task $i$.
3. Wide area network transmission delay from the edge node to the cloud data center, $T_{k,i}^{wan}$
Assume the cloud data center provides a connection service of bandwidth $W_{k,i}$ (bit/s) for task $i$ of the $k$th edge node; the transmission delay from the edge node to the cloud data center can then be expressed as:
$$T_{k,i}^{wan} = \frac{(1-\lambda_i) D_i}{W_{k,i}}$$
4. Computation delay of the cloud data center, $T_{i}^{cloud}$
The data center allocates computing power $f_i^c$ for the computing task of the $i$th user, so the computing delay of the cloud data center is expressed as:
$$T_{i}^{cloud} = \frac{(1-\lambda_i) C_i}{f_i^{c}}$$
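Summing the four components above gives the total task delay. A sketch under the stated edge/cloud split λ (the parameter names, and passing the already-computed access rate rather than the Shannon terms, are simplifying assumptions):

```python
def total_task_delay(data_bits, cycles, lam, rate_access, j_edge, w_wan, f_cloud):
    """Total delay of one computing task: wireless access transmission,
    edge processing of the share lam, WAN transfer of the offloaded share,
    and cloud computation of that share (illustrative sketch)."""
    t_access = data_bits / rate_access       # user -> edge transmission
    t_edge = lam * cycles / j_edge           # edge processing
    t_wan = (1 - lam) * data_bits / w_wan    # edge -> cloud transmission
    t_cloud = (1 - lam) * cycles / f_cloud   # cloud computation
    return t_access + t_edge + t_wan + t_cloud
```

For example, 100 bits and 50 cycles with lam = 0.5, access rate 10, edge power 5, WAN bandwidth 20 and cloud power 25 gives 10 + 5 + 2.5 + 1 = 18.5 time units.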
In a wireless system, delay is one of the important characteristics for measuring system performance, and the delay characteristic of the system can be measured by the sum of the task queue lengths of all segments across cloud, network, edge and end. Considering the dynamic queue characteristics of the edge nodes and the cloud data center node, the average delay of the system can be expressed as:
$$\bar{T} = \lim_{T\to\infty} \frac{1}{T} \sum_{t=0}^{T-1} \sum_{k} \mathbb{E}\left[Q_k(t) + S_k(t)\right]$$
where $S_k(t)$ is the computing task queue at the cloud data center server side offloaded from the $k$th edge node at time $t$, $Q_k(t)$ is the computing task queue present on the $k$th edge node, and $t$ denotes the $t$th decision time.
In an embodiment of the present invention, the determining, from the plurality of distributed computing nodes, the target computing node that consumes less time than the demand processing time includes:
determining remaining computing power of the plurality of distributed computing nodes; from the plurality of distributed computing nodes, determining a target computing node for which the elapsed time is less than the demand processing time and the remaining computing power is greater than the demand computing power.
Since the quantization of the demand computing power of the computing task has been described above, the remaining computing power of the plurality of distributed computing nodes can be determined, and a target computing node whose elapsed time is less than the demand processing time and whose remaining computing power is greater than the demand computing power can then be determined from the plurality of distributed computing nodes. In this way, the delay of traversing nodes is reduced, a better route can be selected, and computing tasks are prevented from being transmitted to nodes whose computing power is strained.
In an embodiment of the present invention, further includes:
and carrying out abnormal early warning on each distributed computing node and the distributed computing pool where each distributed computing node is located.
In a specific implementation, an early warning model can be created, and the health condition of the main monitoring indexes of a designated computing node or edge network computing node can be predicted from the prediction results of the early warning model.

Meanwhile, health prediction of the main monitoring indexes is also performed for the network path nodes over which the task is transmitted to the edge network computing node, ensuring early warning of the health condition of every link between the task and the computation.
In an embodiment of the present invention, the performing anomaly early warning on each distributed computing node and the distributed computing pool includes:
aiming at each target distributed computing node and the distributed computing pool where the target distributed computing node is positioned, obtaining the probability of abnormal state occurrence and the probability of normal state occurrence in a sample set; and under the condition that the probability of the abnormal state is larger than that of the normal state, carrying out abnormal early warning on the distributed computing nodes and the distributed computing power pool where the distributed computing nodes are positioned.
In a specific implementation, there are two states: the abnormal state R_1 and the normal state R_2. The extracted feature values of the monitoring indexes are expressed as T = {t_1, t_2, ..., t_12}, covering monitoring indexes such as network delay, packet loss rate, memory utilization, CPU utilization, and disk occupancy.
According to the conditional probability formula, given that the feature value T appears, the probability that the suspected anomaly is judged to be the abnormal state R_1 is

P(R_1 | T) = P(T | R_1) · P(R_1) / P(T)
where P(T | R_1) denotes the probability that the feature value T appears in the abnormal subset of the sample set, and P(R_1) denotes the proportion of the abnormal subset in the whole sample set.
Similarly, given that the feature value T appears, the probability that the suspected anomaly is judged to be the normal state R_2 is expressed as:

P(R_2 | T) = P(T | R_2) · P(R_2) / P(T)
where P(T | R_2) denotes the probability that the feature value T appears in the normal subset of the sample set, and P(R_2) denotes the proportion of the normal subset in the whole sample set.
Under the naive Bayes assumption, the feature values t_i are mutually independent, so the probabilities that the suspected anomaly is judged as R_1 and as R_2 can be computed as products of the probabilities of independent events. After comparing the two probabilities, if P(R_1 | T) > P(R_2 | T), the monitored index is considered abnormal; otherwise it is considered normal.
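The decision rule above can be sketched as a minimal naive Bayes comparison. Since P(T) is common to both posteriors it cancels, so only the numerators are compared; the function name and argument layout below are illustrative assumptions:

```python
from math import prod

def is_abnormal(feature_probs_abnormal, feature_probs_normal,
                p_abnormal, p_normal):
    """Naive Bayes decision for one monitored node.

    feature_probs_abnormal / feature_probs_normal hold P(t_i | R_1)
    and P(t_i | R_2) for each monitoring index; p_abnormal / p_normal
    are the class priors (the proportions of the abnormal and normal
    subsets in the whole sample set).  The common denominator P(T)
    cancels, so only the numerators are compared.
    """
    score_r1 = prod(feature_probs_abnormal) * p_abnormal
    score_r2 = prod(feature_probs_normal) * p_normal
    return score_r1 > score_r2
```

When the abnormal-side score is larger, the index is flagged and an early warning would be raised for the node and its computing power pool.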
In the embodiment of the invention, a computing power network is constructed between a central computing power pool and a plurality of distributed computing power pools, the central computing power pool being deployed with a central computing node and each distributed computing power pool being deployed with a distributed computing node; task demand information is determined for a computing task initiated by a user, and a target computing node is determined from the central computing node and the plurality of distributed computing nodes according to the task demand information, so that the target computing node is invoked through the computing power network to process the computing task, the target computing node invoking the computing power of its computing power pool to perform the computation. In this way, distributed computing power resources are utilized by constructing a computing power network, improving the flexibility and utilization rate of computing power resources; and computing power scheduling is performed in combination with task demands, improving the efficiency with which computing power resources are applied.
Referring to fig. 3, a flowchart illustrating steps of another method for task processing according to an embodiment of the present invention may specifically include the following steps:
step 301, constructing a computing force network among a central computing force pool and a plurality of distributed computing force pools; the central computing force pools are deployed with central computing nodes, and each distributed computing force pool is deployed with distributed computing nodes.
For devices with idle computing power resources, the idle resources can be shared and aggregated into distributed computing power pools. As shown in fig. 2, computing power resources can be shared through a multi-cloud interconnection network by enterprise DC1 and enterprise DC2 through enterprise DCN, a disaster recovery center, public cloud A, public cloud B, a private cloud, and the like. The shared computing power resources can be divided into a plurality of distributed computing power pools according to their different locations. A distributed computing power pool can be located at the edge of the network for edge computing, and can be deployed with a distributed computing node, i.e., an edge computing node, which can be used to invoke the computing power resources in the distributed computing power pool for the relevant computation.
In addition to the distributed computing force pools, the central computing force pool may be set by deploying its own equipment, and may be provided with central computing nodes, i.e. cloud data center computing nodes, which may be used to invoke computing force resources of the central computing force pool for relevant computation.
In order to facilitate the utilization of the computing power resources, for the central computing power pool and the plurality of distributed computing power pools, a computing power network can be constructed, the computing power network can record computing power information of each computing power pool, such as available computing power size, computing power type and the like, through a routing table, and can record network information among the computing power pools.
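The routing table described above can be sketched as a simple data structure. All field names below are illustrative assumptions; the patent only requires that the table record each pool's computing power information (e.g., available computing power size and type) and the network information between pools:

```python
from dataclasses import dataclass

@dataclass
class PowerPoolRoute:
    """One routing-table entry of the computing power network.

    Field names are hypothetical: the source specifies only that
    computing power information and inter-pool network information
    are recorded, not the exact schema.
    """
    pool_id: str
    available_power: float   # e.g. available cycles/s or FLOPS
    power_type: str          # e.g. "CPU", "GPU"
    link_bandwidth: float    # bit/s toward this pool
    link_latency: float      # seconds toward this pool

# A toy routing table with one edge pool and the central pool.
routing_table = {
    "edge-1": PowerPoolRoute("edge-1", 2.0e12, "CPU", 1.0e9, 0.005),
    "central": PowerPoolRoute("central", 5.0e14, "GPU", 1.0e8, 0.040),
}
```

A scheduler consulting such a table could compare the pools' available power and link characteristics when choosing a target node.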
Step 302, determining task demand information for a computing task initiated by a user; wherein the task demand information includes demand processing time.
When a user has a computing requirement, the user can initiate a computing task. As shown in fig. 2, enterprise DC1 and enterprise DC2 through enterprise DCN initiate computing tasks to the network control plane; the task demand information of the user, i.e., the computing power resource requirement for processing the computing task, can then be determined from the information carried by the computing task.
As an example, the task demand information may include demand confidence, demand processing time.
The demand confidence is the degree to which the distributed computing node and its computing power pool are trusted to process the computing task. It is determined according to the importance level of the computing task: the higher the importance level of the computing task, the lower its demand confidence.
The demand processing time is the time required to process the computing task, i.e., the time delay described above. The delay required differs from one computing task to another and can be carried in the computing task.
Step 303, determining a target computing node from the plurality of distributed computing nodes when the required processing time is less than a preset time, and determining the central computing node as the target computing node when the required processing time is greater than or equal to the preset time.
In a specific implementation, the time consumption of processing the computing task by a plurality of distributed computing nodes in the computing power network, namely the time delay in the above description, can be predicted, and then a target computing node with the time consumption less than the required processing time, namely the node meeting the time delay requirement, can be selected from the plurality of distributed computing nodes.
Step 304, calling the target computing node to process the computing task through the computing power network; the target computing node is used for calling the computing power of the computing power pool to perform computing.
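The routing in steps 302 to 304 can be sketched as follows; the function and parameter names, and the tie-break of picking the fastest qualifying node, are assumptions, not the patent's:

```python
def pick_target(demand_time, preset_time, predicted_times):
    """Route a task per steps 302-303.

    A tight deadline (demand_time below the preset threshold) goes to
    a distributed node whose predicted elapsed time meets the deadline;
    otherwise the central computing node handles the task.
    `predicted_times` maps distributed node ids to predicted delays.
    """
    if demand_time < preset_time:
        candidates = {n: t for n, t in predicted_times.items()
                      if t < demand_time}
        if candidates:
            # Assumption: prefer the fastest qualifying node.
            return min(candidates, key=candidates.get)
    return "central"
```

With no distributed node meeting the deadline, the sketch falls back to the central node, which matches the spirit of step 303 but is not spelled out in the source.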
Referring to fig. 4, a flowchart illustrating steps of another method for task processing according to an embodiment of the present invention may specifically include the following steps:
Step 401, constructing a computing force network among a central computing force pool and a plurality of distributed computing force pools; the central computing force pools are deployed with central computing nodes, and each distributed computing force pool is deployed with distributed computing nodes.
For devices with idle computing power resources, the idle resources can be shared and aggregated into distributed computing power pools. As shown in fig. 2, computing power resources can be shared through a multi-cloud interconnection network by enterprise DC1 and enterprise DC2 through enterprise DCN, a disaster recovery center, public cloud A, public cloud B, a private cloud, and the like. The shared computing power resources can be divided into a plurality of distributed computing power pools according to their different locations. A distributed computing power pool can be located at the edge of the network for edge computing, and can be deployed with a distributed computing node, i.e., an edge computing node, which can be used to invoke the computing power resources in the distributed computing power pool for the relevant computation.
In addition to the distributed computing force pools, the central computing force pool may be set by deploying its own equipment, and may be provided with central computing nodes, i.e. cloud data center computing nodes, which may be used to invoke computing force resources of the central computing force pool for relevant computation.
In order to facilitate the utilization of the computing power resources, for the central computing power pool and the plurality of distributed computing power pools, a computing power network can be constructed, the computing power network can record computing power information of each computing power pool, such as available computing power size, computing power type and the like, through a routing table, and can record network information among the computing power pools.
Step 402, determining task demand information for a computing task initiated by a user; wherein the task demand information includes demand processing time and demand computing power.
When a user has a computing requirement, the user can initiate a computing task. As shown in fig. 2, enterprise DC1 and enterprise DC2 through enterprise DCN initiate computing tasks to the network control plane; the task demand information of the user, i.e., the computing power resource requirement for processing the computing task, can then be determined from the information carried by the computing task.
As an example, the task demand information may include demand confidence, demand processing time.
The demand confidence is the degree to which the distributed computing node and its computing power pool are trusted to process the computing task. It is determined according to the importance level of the computing task: the higher the importance level of the computing task, the lower its demand confidence.
The demand processing time is the time required to process the computing task, i.e., the time delay described above. The delay required differs from one computing task to another and can be carried in the computing task.
In an embodiment of the present invention, the task demand information may further include the demand computing power. In this case, determining the task demand information for the computing task initiated by the user includes: performing computing power quantization on the computing task initiated by the user to obtain the demand computing power.
Step 403, predicting the time spent by the plurality of distributed computing nodes in the computing task and determining the remaining computing power of the plurality of distributed computing nodes when the required processing time is less than a preset time.
Step 404, determining, from the plurality of distributed computing nodes, a target computing node for which the elapsed time is less than the demand processing time and the remaining computing power is greater than the demand computing power.
Since the quantization of the demand computing power of the computing task has been described above, the remaining computing power of the plurality of distributed computing nodes can be determined, and a target computing node whose elapsed time is less than the demand processing time and whose remaining computing power is greater than the demand computing power can then be determined from the plurality of distributed computing nodes. In this way, the delay of traversing nodes is reduced, a better route can be selected, and computing tasks are prevented from being transmitted to nodes whose computing power is strained.
Step 405, calling the target computing node to process the computing task through the computing power network; the target computing node is used for calling the computing power of the computing power pool to perform computing.
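The fig. 4 flow adds a remaining-computing-power check on top of the time filter of fig. 3. A hedged sketch of steps 403 and 404 (names and the fastest-node tie-break are assumptions):

```python
def pick_target_with_power(demand_time, demand_power, preset_time,
                           predicted_times, remaining_powers):
    """Steps 403-404: among the distributed nodes, keep only those
    whose predicted elapsed time is below the demand processing time
    AND whose remaining computing power exceeds the demand computing
    power, so strained nodes are never handed the task; otherwise
    fall back to the central computing node.
    """
    if demand_time < preset_time:
        ok = [n for n in predicted_times
              if predicted_times[n] < demand_time
              and remaining_powers.get(n, 0.0) > demand_power]
        if ok:
            # Assumption: among qualifying nodes, pick the fastest.
            return min(ok, key=predicted_times.get)
    return "central"
```

Note how a node that meets the deadline but lacks spare computing power is skipped, which is the point of the combined filter.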
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Referring to fig. 5, a schematic structural diagram of a task processing device according to an embodiment of the present invention may specifically include the following modules:
the computing power network construction module 501 is used for constructing a computing power network between a central computing power pool and a plurality of distributed computing power pools; the central computing force pools are deployed with central computing nodes, and each distributed computing force pool is deployed with distributed computing nodes.
The task requirement information determining module 502 is configured to determine task requirement information for a computing task initiated by a user.
A target computing node confirmation module 503, configured to determine a target computing node from the central computing node and a plurality of distributed computing nodes according to the task demand information, so as to invoke the target computing node to process the computing task through the computing power network; the target computing node is used for calling the computing power of the computing power pool to perform computing.
In an embodiment of the present invention, the task demand information includes a demand confidence, and the target computing node confirmation module 503 includes:
the confidence-based first selecting sub-module is configured to determine a target computing node from the plurality of distributed computing nodes when the demand confidence is greater than a preset confidence;

and the confidence-based second selecting sub-module is configured to determine the central computing node as the target computing node when the demand confidence is less than or equal to the preset confidence.
In an embodiment of the present invention, the task requirement information includes a requirement processing time, and the target computing node confirmation module 503 includes:
the first selecting submodule is used for determining a target computing node from the plurality of distributed computing nodes under the condition that the required processing time is smaller than the preset time;
And the second selecting submodule is used for determining the central computing node as a target computing node under the condition that the required processing time is greater than or equal to the preset time.
In an embodiment of the present invention, the time-based first selecting sub-module includes:
a time-consuming determining unit, configured to predict a time consumption of the plurality of distributed computing nodes in the computing task;
and a time-consumption-based selecting unit, configured to determine, from the plurality of distributed computing nodes, a target computing node whose elapsed time is less than the demand processing time.
In an embodiment of the present invention, the task demand information includes demand force, and the task demand information determining module 502 includes:
and the calculation force quantization unit is used for carrying out calculation force quantization aiming at the calculation task initiated by the user to obtain the required calculation force.
In an embodiment of the present invention, the time-consumption-based selecting unit includes:
a remaining computing power determination subunit configured to determine remaining computing power of the plurality of distributed computing nodes;
and a combined time-and-computing-power selecting subunit, configured to determine, from the plurality of distributed computing nodes, a target computing node whose elapsed time is less than the demand processing time and whose remaining computing power is greater than the demand computing power.
In an embodiment of the present invention, further includes:
and the abnormality early warning module is used for carrying out abnormality early warning on each distributed computing node and the distributed computing pool where each distributed computing node is located.
In an embodiment of the present invention, the abnormality early warning module includes:
the probability determination submodule is used for obtaining the probability of abnormal states and the probability of normal states in the sample set aiming at each target distributed computing node and the distributed computing pool;
and the probability early warning sub-module is used for carrying out abnormal early warning on the distributed computing nodes and the distributed computing force pool where the distributed computing nodes are positioned under the condition that the probability of the abnormal state is larger than that of the normal state.
In the embodiment of the invention, a computing power network is constructed between a central computing power pool and a plurality of distributed computing power pools, the central computing power pool being deployed with a central computing node and each distributed computing power pool being deployed with a distributed computing node; task demand information is determined for a computing task initiated by a user, and a target computing node is determined from the central computing node and the plurality of distributed computing nodes according to the task demand information, so that the target computing node is invoked through the computing power network to process the computing task, the target computing node invoking the computing power of its computing power pool to perform the computation. In this way, distributed computing power resources are utilized by constructing a computing power network, improving the flexibility and utilization rate of computing power resources; and computing power scheduling is performed in combination with task demands, improving the efficiency with which computing power resources are applied.
An embodiment of the present invention also provides an electronic device, which may include a processor, a memory, and a computer program stored on the memory and capable of running on the processor, where the computer program implements the method of task processing as above when executed by the processor.
An embodiment of the present invention also provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements a method for task processing as above.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
The method and device for task processing provided by the present invention have been described in detail above. Specific examples have been used herein to illustrate the principles and embodiments of the invention, and the description of the above embodiments is intended only to help understand the method of the present invention and its core idea. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the ideas of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A method of task processing, the method comprising:
constructing a computing power network among the central computing power pool and the plurality of distributed computing power pools; the central computing force pools are provided with central computing nodes, and each distributed computing force pool is provided with distributed computing nodes;
determining task demand information aiming at a computing task initiated by a user;
determining a target computing node from the central computing node and a plurality of distributed computing nodes according to the task demand information, so as to call the target computing node to process the computing task through the computing network; the target computing node is used for calling the computing power of the computing power pool to perform computing.
2. The method of claim 1, wherein the task demand information includes demand confidence, and wherein the determining a target computing node from the central computing node and a plurality of distributed computing nodes based on the task demand information comprises:
determining a target computing node from the plurality of distributed computing nodes when the required confidence coefficient is greater than a preset confidence coefficient;
and under the condition that the demand confidence coefficient is smaller than or equal to a preset confidence coefficient, determining the central computing node as a target computing node.
3. The method of claim 1, wherein the task demand information includes demand processing time, and wherein the determining a target computing node from the central computing node and a plurality of distributed computing nodes based on the task demand information comprises:
determining a target computing node from the plurality of distributed computing nodes when the required processing time is less than a preset time;
and under the condition that the required processing time is greater than or equal to the preset time, determining the central computing node as a target computing node.
4. The method of claim 3, wherein the determining a target computing node from the plurality of distributed computing nodes comprises:
Predicting the time spent for a plurality of distributed computing nodes to be at the computing task;
determining, from the plurality of distributed computing nodes, the target computing node that consumes less time than the demand processing time.
5. The method of claim 4, wherein the task demand information includes demand computing power, wherein the determining task demand information for a user-initiated computing task includes:
aiming at a calculation task initiated by a user, carrying out calculation force quantification to obtain a demand calculation force;
the determining, from the plurality of distributed computing nodes, the target computing node that consumes less time than the demand processing time, comprising:
determining remaining computing power of the plurality of distributed computing nodes;
from the plurality of distributed computing nodes, determining a target computing node for which the elapsed time is less than the demand processing time and the remaining computing power is greater than the demand computing power.
6. The method according to any one of claims 1 to 5, further comprising:
and carrying out abnormal early warning on each distributed computing node and the distributed computing pool where each distributed computing node is located.
7. The method according to claim 2, wherein the performing anomaly early warning on each distributed computing node and the distributed computing pool comprises:
Aiming at each target distributed computing node and the distributed computing pool where the target distributed computing node is positioned, obtaining the probability of abnormal state occurrence and the probability of normal state occurrence in a sample set;
and under the condition that the probability of the abnormal state is larger than that of the normal state, carrying out abnormal early warning on the distributed computing nodes and the distributed computing power pool where the distributed computing nodes are positioned.
8. An apparatus for task processing, the apparatus comprising:
the computing power network construction module is used for constructing a computing power network between the central computing power pool and the plurality of distributed computing power pools; the central computing force pools are provided with central computing nodes, and each distributed computing force pool is provided with distributed computing nodes;
the task demand information determining module is used for determining task demand information aiming at a computing task initiated by a user;
the target computing node confirmation module is used for determining a target computing node from the central computing node and the distributed computing nodes according to the task demand information so as to call the target computing node to process the computing task through the computing network; the target computing node is used for calling the computing power of the computing power pool to perform computing.
9. An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the task processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the task processing method according to any one of claims 1 to 7.
CN202211632581.1A 2022-12-19 2022-12-19 Task processing method and device Pending CN116302469A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211632581.1A CN116302469A (en) 2022-12-19 2022-12-19 Task processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211632581.1A CN116302469A (en) 2022-12-19 2022-12-19 Task processing method and device

Publications (1)

Publication Number Publication Date
CN116302469A true CN116302469A (en) 2023-06-23

Family

ID=86776840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211632581.1A Pending CN116302469A (en) 2022-12-19 2022-12-19 Task processing method and device

Country Status (1)

Country Link
CN (1) CN116302469A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117453388A (en) * 2023-07-21 2024-01-26 广东奥飞数据科技股份有限公司 Distributed computing power intelligent scheduling system and method
CN117453388B (en) * 2023-07-21 2024-02-27 广东奥飞数据科技股份有限公司 Distributed computing power intelligent scheduling system and method
CN117540337A (en) * 2023-11-15 2024-02-09 中国铁塔股份有限公司辽宁省分公司 Multi-source fusion intelligent regional safety and precision sensing method
CN117540337B (en) * 2023-11-15 2024-04-16 中国铁塔股份有限公司辽宁省分公司 Multi-source fusion intelligent regional safety and precision sensing method

Similar Documents

Publication Publication Date Title
CN116302469A (en) Task processing method and device
US20200137151A1 (en) Load balancing engine, client, distributed computing system, and load balancing method
Jiang et al. Optimal cloud resource auto-scaling for web applications
US9213574B2 (en) Resources management in distributed computing environment
US8352951B2 (en) Method and apparatus for utility-based dynamic resource allocation in a distributed computing system
US20070250630A1 (en) Method and a system of generating and evaluating potential resource allocations for an application
US11579933B2 (en) Method for establishing system resource prediction and resource management model through multi-layer correlations
Tran et al. A proactive cloud scaling model based on fuzzy time series and SLA awareness
US8305911B2 (en) System and method for identifying and managing service disruptions using network and systems data
CN112579194B (en) Block chain consensus task unloading method and device based on time delay and transaction throughput
CN115134368B (en) Load balancing method, device, equipment and storage medium
CN108733509A (en) Method and system for data to be backed up and restored in group system
JP2002268922A (en) Performance monitoring device of www site
CN113228574A (en) Computing resource scheduling method, scheduler, internet of things system and computer readable medium
Cheng et al. Proscale: Proactive autoscaling for microservice with time-varying workload at the edge
US20220318065A1 (en) Managing computer workloads across distributed computing clusters
Zhang et al. Optimal server resource allocation using an open queueing network model of response time
CN111800291B (en) Service function chain deployment method and device
CN115562841B (en) Cloud video service self-adaptive resource scheduling system and method
CN115277249B (en) Network security situation perception method based on cooperation of multi-layer heterogeneous network
CN113778685A (en) Unloading method for urban gas pipe network edge computing system
CN112288433B (en) Block chain consensus task processing system and method supporting edge-side cooperation
Aljulayfi et al. A Machine Learning based Context-aware Prediction Framework for Edge Computing Environments.
Nisar et al. Survey on arima model workloads in a datacenter with respect to cloud architecture
CN114531365B (en) Cloud resource automatic operation and maintenance method under multi-cloud environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination