CN115599539A - Engine scheduling method based on task amount prediction and related equipment - Google Patents


Info

Publication number
CN115599539A
Authority
CN
China
Prior art keywords: engine, task amount, task, amount, average
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211170133.4A
Other languages
Chinese (zh)
Inventor
冯竞凯
马雪娇
王成章
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Information and Telecommunication Co Ltd
Beijing Guodiantong Network Technology Co Ltd
Original Assignee
State Grid Information and Telecommunication Co Ltd
Beijing Guodiantong Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Information and Telecommunication Co Ltd, Beijing Guodiantong Network Technology Co Ltd filed Critical State Grid Information and Telecommunication Co Ltd
Priority to CN202211170133.4A
Publication of CN115599539A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G06F9/5083Techniques for rebalancing the load in a distributed system

Abstract

The present application provides an engine scheduling method based on task amount prediction, and related equipment. The method includes: predicting the total predicted task amount within a future predetermined time interval from the total historical task amount processed by an engine cluster up to the current moment; calculating the average task amount of each engine node in the engine cluster from the total predicted task amount; and, in response to determining that the average task amount exceeds a preset average task amount, adjusting the number of engine nodes in the engine cluster according to the total predicted task amount and the average task amount.

Description

Engine scheduling method based on task amount prediction and related equipment
Technical Field
The present application relates to the field of data processing technologies, and in particular, to an engine scheduling method based on task amount prediction and a related device.
Background
With the continuous development of information technology, the workflow engine, as the core of data processing, plays an increasingly important role in today's task distribution systems. Existing workflow engines are often combined with cloud computing technology; in actual production, however, because task request volumes differ, workflow engines deployed in a cloud computing environment place different resource demands on the cloud. As the task volume changes continuously, the number of task handlers a workflow engine requires also varies. A traditional workflow engine cannot predict the task volume: when the task volume is large, it cannot balance load well across engine nodes, and when the task volume is low, its fixed-resource engine nodes occupy too many cloud computing resources, causing waste. In addition, the workflow engine occupies considerable server-side memory during operation, and the traditional fixed-resource operation mode in a cloud computing environment leads to low utilization of cloud computing resources and poor reusability. Reasonably scheduling the workflow engine therefore avoids problems such as long workflow execution times, low utilization of cloud computing resources, and uneven load distribution.
Disclosure of Invention
In view of the above, an object of the present application is to provide an engine scheduling method based on task amount prediction, and a related device, so as to solve the problem of how to reasonably schedule a workflow engine according to a predicted task amount.
In view of the above, the present application provides an engine scheduling method based on task amount prediction, the method including:
predicting the total predicted task amount within a future predetermined time interval from the total historical task amount processed by the engine cluster up to the current moment;
calculating the average task amount of each engine node in the engine cluster from the total predicted task amount;
and, in response to determining that the average task amount exceeds a preset average task amount, adjusting the number of engine nodes in the engine cluster according to the total predicted task amount and the average task amount.
Based on the same inventive concept, the exemplary embodiments of the present application further provide an engine scheduling apparatus based on task amount prediction, the apparatus including:
a task amount prediction module, configured to predict the total predicted task amount within a future predetermined time interval from the total historical task amount processed by the engine cluster up to the current moment;
a task amount calculation module, configured to calculate the average task amount of each engine node in the engine cluster from the total predicted task amount;
and an engine scheduling module, configured to, in response to determining that the average task amount exceeds a preset average task amount, adjust the number of engine nodes in the engine cluster according to the total predicted task amount and the average task amount.
Based on the same inventive concept, the exemplary embodiments of the present application further provide an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor; the processor, when executing the program, implements the engine scheduling method based on task amount prediction as described in any one of the above.
Based on the same inventive concept, the exemplary embodiments of this application also provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the engine scheduling method based on task amount prediction as described in any one of the above.
As can be seen from the foregoing, the engine scheduling method based on task amount prediction and the related device provided by the present application predict the total task amount within a future predetermined time interval from the total historical task amount processed by the engine cluster up to the current time, calculate the average task amount of each engine node in the engine cluster from that predicted total, and, in response to the average task amount exceeding a preset average task amount, adjust the number of engine nodes in the engine cluster according to the predicted total task amount and the average task amount. The present application predicts the task amount for a coming period from the workflow engine's historical task amount data, and then judges whether the average task amount each engine node in the cluster would need to process, given the predicted future total, exceeds the preset average task amount, thereby achieving dynamic planning of the workflow engine.
Drawings
In order to more clearly illustrate the technical solutions in the present application or in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic view of an application scenario of an engine scheduling method based on task amount prediction according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an engine scheduling method based on task amount prediction according to an embodiment of the present application;
fig. 3 is a schematic diagram illustrating example parameters of a working system of an engine scheduling method based on task amount prediction according to an embodiment of the present application.
Fig. 4 is another schematic flowchart of an engine scheduling method based on task amount prediction according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an engine scheduling apparatus based on task amount prediction according to an embodiment of the present application.
Fig. 6 is a more specific schematic diagram of the hardware structure of the electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, the present disclosure will be described in further detail below with reference to specific embodiments and the accompanying drawings.
It is to be noted that, unless otherwise defined, technical or scientific terms used herein should have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. As used in this application, the terms "first," "second," and the like do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item preceding the word comprises the element or item listed after the word and its equivalent, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
As described in the Background section, deploying a cloud workflow engine with fixed computing resources may cause resource waste or cloud computing resource shortage under different task volumes. Moreover, facing continuously changing task volumes, the conventional workflow engine cannot predict the future task volume, and therefore cannot achieve good load balance.
In the process of implementing the present disclosure, the applicant found that the scheduling problem of a workflow engine is currently often handled by specifying, in the process file, the role or person corresponding to the task on each node. This method is simple, but it cannot be applied to a system with complex business processes and cannot dynamically allocate tasks across engine nodes. Meanwhile, the traditional workflow scheduling method cannot comprehensively account for the demands and workload of workers during task execution, nor for the lengthening of workflow completion time caused by rapid growth in workload.
Hereinafter, the technical means of the present disclosure will be described in further detail with reference to specific examples.
Referring to fig. 1, a schematic view of an application scenario of an engine scheduling method based on task amount prediction according to an embodiment of the present application is shown.
As shown in the figure, an engine cluster includes a plurality of distributed engine nodes, and at least one task handler is assigned to each engine node. When there are tasks to be processed, the total task amount is distributed to the engine nodes, and each engine node then distributes its share to the processing terminal devices of its task handlers. The number of engine nodes in the engine cluster can be set according to the actual total task amount, and the task handlers on each engine node can likewise be set flexibly according to that node's average task amount.
Referring to fig. 2, a flowchart of an engine scheduling method based on task amount prediction according to an embodiment of the present application is schematically shown.
Step S201, predicting the total predicted task amount within a future predetermined time interval from the total historical task amount processed by the engine cluster up to the current time.
The workflow engine can be understood as a software system for realizing workflow logic, provides environmental support through computer technology, realizes definition, execution and management of workflows, and effectively coordinates information interaction among engine nodes in the whole workflow execution process. In the embodiment of the application, the workflow engine is deployed in the engine cluster in an engine node mode.
As an optional embodiment, in order to achieve reasonable prediction of the task amount arriving in the future, in the embodiment of the present application, the future task amount is predicted by inputting the historical task amount of the engine cluster into a pre-constructed task amount prediction model, specifically:
firstly, the total historical task amount processed by the engine cluster up to the current moment is acquired;
further, the total historical task amount is input into a pre-constructed task amount prediction model to obtain the total predicted task amount within a future predetermined time interval. The pre-constructed task amount prediction model may, but is not limited to, use a Long Short-Term Memory (LSTM) network to predict the total predicted task amount within the future predetermined time interval, and the predetermined time interval can be set flexibly according to actual requirements.
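The prediction step can be sketched as follows. The patent names an LSTM as one option for the prediction model; as a minimal stand-in (the actual model, its features, and the interval length are not specified in the source), the sketch below forecasts the next interval's task total from a sliding window of historical per-interval totals. The class name, the moving-average rule, and the window size are all illustrative assumptions, not the patented model.

```python
from collections import deque

class TaskAmountPredictor:
    """Minimal stand-in for the pre-constructed task amount prediction model.

    The patent suggests an LSTM; here a sliding-window moving average
    forecasts the task total for the next predetermined interval.
    """

    def __init__(self, window_size=4):
        # Per-interval historical task totals, oldest entries dropped.
        self.history = deque(maxlen=window_size)

    def observe(self, interval_total):
        # Record the task total processed in the latest interval.
        self.history.append(interval_total)

    def predict_next_interval(self):
        # Forecast: average of recent intervals (placeholder for the LSTM).
        if not self.history:
            return 0
        return sum(self.history) / len(self.history)

predictor = TaskAmountPredictor(window_size=3)
for total in [900, 1000, 1100]:
    predictor.observe(total)
print(predictor.predict_next_interval())  # → 1000.0
```

In a real deployment the moving-average body would be swapped for a trained sequence model; the interface (observe history, predict the next interval) is what the method relies on.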
Step S202, calculating the average task amount of each engine node in the engine cluster according to the predicted task amount.
In a specific implementation, when the predicted total number of tasks in a future predetermined time interval is obtained, the average task amount of each engine node may be calculated according to the number of engine nodes in the engine cluster at the current time, for example, if the predicted total number of tasks is 1000, and the number of engine nodes in the engine cluster at the current time is 10, the average task amount of each engine node is 100.
In a specific implementation, the task amount may instead be allocated according to the processing performance of each engine node: a node with higher processing performance is allocated more tasks, and a node with lower processing performance is allocated fewer.
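The two allocation rules above (uniform average and performance-weighted) can be sketched as follows; the function names and the weight representation are illustrative assumptions.

```python
def average_task_amount(predicted_total, node_count):
    # Average task amount each engine node must process.
    if node_count <= 0:
        raise ValueError("engine cluster must contain at least one node")
    return predicted_total / node_count

def weighted_allocation(predicted_total, performance_weights):
    # Variant: split the predicted total in proportion to node performance.
    total_weight = sum(performance_weights)
    return [predicted_total * w / total_weight for w in performance_weights]

print(average_task_amount(1000, 10))      # → 100.0
print(weighted_allocation(1000, [3, 1]))  # → [750.0, 250.0]
```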
As an alternative embodiment, further, the required amount of task handlers on the engine node may be estimated based on the average task amount.
Step S203, in response to determining that the average task amount exceeds a preset average task amount, adjusting the number of engine nodes in the engine cluster according to the total predicted task amount and the average task amount.
As an optional embodiment, in response to determining that the average task amount exceeds the preset average task amount, a task amount surge warning is issued, and the number of engine nodes in the engine cluster is increased according to the total predicted task amount and the average task amount; the task amount surge warning prompts the staff that the average task amount exceeds the preset average task amount.
Specifically, when the average task amount exceeds the preset average task amount, the number of engine nodes at the current time is not enough to process the task amount arriving in the future, so the number of engine nodes in the current engine cluster must be increased according to the total predicted task amount and the average task amount. For example, if the predicted total is 1000 tasks and the current engine cluster has 10 engine nodes, the average task amount per engine node is 100 tasks. If the preset average task amount is 50 tasks, the preset average is smaller than the average, so engine nodes must be added until the average task amount no longer exceeds the preset average. If 30 engine nodes are added, the cluster has 40 engine nodes; with a predicted total of 1000 tasks, the average task amount per node is 25 tasks, which ensures that the engine nodes in the current cluster can process the predicted task amount arriving in the future.
It should be noted that, when the average task amount exceeds the preset average task amount, the engine cluster system may send a task amount surge warning to the management terminal device, so that a worker is prompted that the average task amount exceeds the preset average task amount and can adjust the engine nodes according to actual requirements.
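The scale-up arithmetic in the example above can be sketched as follows. The source does not fix an exact scaling policy (its example adds 30 nodes, reaching 40), so the ceiling-division rule below, which computes the minimum node count satisfying the threshold, and all names are illustrative assumptions.

```python
import math

def nodes_needed(predicted_total, preset_average):
    # Smallest node count keeping the average task amount at or
    # below the preset average task amount.
    return math.ceil(predicted_total / preset_average)

def scale_up(current_nodes, predicted_total, preset_average):
    # Engine nodes to add so the cluster can absorb the predicted load.
    return max(0, nodes_needed(predicted_total, preset_average) - current_nodes)

print(nodes_needed(1000, 50))  # → 20
print(scale_up(10, 1000, 50))  # → 10
```

For the patent's figures (1000 tasks, preset average 50), 20 nodes is the minimum that satisfies the threshold; adding 30 nodes for a total of 40, as in the example, also satisfies it with headroom.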
As an optional embodiment, in response to determining that the average task amount does not exceed the preset average task amount, it is determined whether the load task amount of the engine node exceeds a preset load task amount upper limit.
Specifically, when the average task amount does not exceed the preset average task amount, it is further necessary to determine whether the load task amount of the engine node exceeds a preset load task amount upper limit. The preset upper limit may be set according to working-system parameters such as CPU performance, memory usage rate, disk space occupied, network conditions, and GPU performance.
Referring to fig. 3, a schematic diagram of parameters of an operating system according to an embodiment of the present application is provided.
For example, if the memory usage rate of the current working system exceeds 80%, it is determined that the load task amount of the engine node exceeds a preset load task amount upper limit;
Further, in response to determining that the load task amount of the engine node exceeds the preset load task amount upper limit, the number of engine nodes in the engine cluster is increased according to the load task amount of the engine node and the preset upper limit.
It should be noted that the load task amount of the engine node may be compared with the preset load task amount upper limit. If the load task amount exceeds the upper limit by more than a preset comparison multiple, the number of engine nodes in the engine cluster may be increased directly; if it exceeds the upper limit but by no more than the preset comparison multiple, the number of task handlers on the engine node may instead be expanded, according to the required amount of task handlers on that node, to absorb the excess task amount.
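The upper-limit decision just described can be sketched as follows. The 80% memory threshold comes from the example above; the value of the comparison multiple and all names are illustrative assumptions.

```python
def overload_action(load, upper_limit, comparison_multiple=1.5):
    # Decide how to react when a node's load exceeds the preset upper limit.
    if load <= upper_limit:
        return "no_action"
    if load > upper_limit * comparison_multiple:
        return "add_engine_nodes"   # far above the limit: scale the cluster
    return "add_task_handlers"      # moderately above: staff up the node

print(overload_action(0.70, 0.80))  # → no_action
print(overload_action(0.95, 0.80))  # → add_task_handlers
print(overload_action(1.30, 0.80))  # → add_engine_nodes
```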
As an optional embodiment, in response to determining that the load task amount of the engine node does not exceed the preset load task amount upper limit, it is determined whether the load task amount of the engine node is lower than a preset load task amount lower limit.
Specifically, when the load task amount of the engine node does not exceed the preset upper limit, it is further necessary to determine whether it is lower than the preset load task amount lower limit. As with the upper limit, the preset lower limit may be set according to working-system parameters such as CPU performance, memory usage rate, disk space occupied, network conditions, and GPU performance.
For example, if the memory usage rate of the current working system is lower than 20%, it is determined that the load task amount of the engine node is lower than the preset load task amount lower limit.
Further, in response to determining that the load task amount of the engine node is lower than the preset lower limit, the number of engine nodes in the engine cluster is sufficient at this time, and some engine nodes may even be idle, wasting engine node resources; therefore, the number of engine nodes in the engine cluster may be reduced according to the load task amount of the engine nodes and the preset load task amount lower limit.
It should be noted that the load task amount of the engine node may be compared with the preset lower limit. If the load task amount falls below the lower limit by more than a lowest preset comparison multiple, the number of engine nodes in the engine cluster may be reduced directly; if it is below the lower limit but by no more than that multiple, the number of task handlers may instead be reduced, according to the required amount of task handlers on the engine node, to release the idle workers and save resources.
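The lower-limit branch mirrors the upper-limit one; a sketch under the same caveats (the 20% memory figure is from the example above, while the lowest multiple and all names are illustrative assumptions):

```python
def underload_action(load, lower_limit, lowest_multiple=0.5):
    # Decide how to react when a node's load falls below the preset lower limit.
    if load >= lower_limit:
        return "no_action"
    if load < lower_limit * lowest_multiple:
        return "remove_engine_nodes"   # far below the limit: shrink the cluster
    return "reduce_task_handlers"      # moderately below: free idle workers

print(underload_action(0.30, 0.20))  # → no_action
print(underload_action(0.15, 0.20))  # → reduce_task_handlers
print(underload_action(0.05, 0.20))  # → remove_engine_nodes
```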
As an optional embodiment, in response to determining that the load task amount of the engine node is lower than the preset load task amount lower limit, a task amount reduction warning is issued; the task amount reduction warning prompts the staff that the load task amount of the engine node is below the preset lower limit.
Specifically, when the load task amount of the engine node is lower than the preset lower limit, the engine cluster system may send the task amount reduction warning to the management terminal device, so that a worker is prompted and the engine nodes can be adjusted according to actual requirements.
As an optional embodiment, in response to determining that the load task amount of the engine node is not lower than the preset lower limit, the current engine node resources can handle the current task amount without waste, and the number of engine nodes does not need to be adjusted.
As an optional embodiment, after the engine nodes in the engine cluster have been reasonably scheduled, the number of engine nodes in the engine cluster and the load task amounts of the engine nodes may be continuously monitored and continuously uploaded to the terminal device, so that the terminal device can predict a new future total task amount from the continuously collected data and start a new round of engine node scheduling.
Referring to fig. 4, another flowchart of an engine scheduling method based on task amount prediction according to an embodiment of the present application is schematically shown.
It should be noted that the flow in fig. 4 has already been described in detail in steps S201 to S203 above and is not repeated here; the purpose of fig. 4 is to combine the embodiments of the present application into a single schematic illustration.
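Putting the steps together, one round of the combined flow (predict, average, compare, scale) might look like the loop body below. The three-interval moving average stands in for the LSTM, the load-limit branches are omitted for brevity, and every name and threshold is an illustrative assumption, not the patented implementation.

```python
import math

def schedule_round(history_totals, node_count, preset_average):
    # One scheduling round: predict the next interval, compute the
    # per-node average, and grow the cluster if the average is too high.
    window = history_totals[-3:]
    predicted = sum(window) / len(window)          # stand-in for the LSTM
    average = predicted / node_count
    if average > preset_average:
        # Surge: grow the cluster until the preset average is respected.
        node_count = math.ceil(predicted / preset_average)
    return predicted, average, node_count

predicted, average, nodes = schedule_round([900, 1000, 1100], 10, 50)
print(predicted, average, nodes)  # → 1000.0 100.0 20
```

A full implementation would repeat this round on each predetermined interval, feeding the continuously monitored node counts and load task amounts back in as new history.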
As can be seen from the foregoing, the engine scheduling method based on task amount prediction and the related device provided by the present application predict the total task amount within a future predetermined time interval from the total historical task amount processed by the engine cluster up to the current time, calculate the average task amount of each engine node in the engine cluster from that predicted total, and, in response to the average task amount exceeding a preset average task amount, adjust the number of engine nodes in the engine cluster according to the predicted total task amount and the average task amount. The present application predicts the task amount for a coming period from the workflow engine's historical task amount data, and then judges whether the average task amount each engine node would need to process exceeds the preset average, thereby achieving dynamic planning of the workflow engine. Compared with the traditional workflow engine scheduling mode, predictive analysis of historical task amount data yields the task amount for a coming period, so the workflow engine can be planned in advance, the workflow engine nodes can be dynamically scaled up and down, and the utilization rate of cloud computing resources is improved. In addition, the present application can issue a warning in time when the task amount surges or shrinks, so that workers can promptly adjust the task amount distribution or the scheduling of the engine nodes, improving scheduling flexibility.
It should be noted that the method of the embodiment of the present application may be executed by a single device, such as a computer or a server. The method of the embodiment can also be applied to a distributed scene and completed by the mutual cooperation of a plurality of devices. In this distributed scenario, one device of the multiple devices may only perform one or more steps of the method of the embodiment of the present application, and the multiple devices interact with each other to complete the method.
It should be noted that the above describes some embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Based on the same inventive concept, corresponding to the method of any embodiment, the application also provides an engine scheduling device based on task amount prediction.
Referring to fig. 5, a schematic structural diagram of an engine scheduling apparatus based on task amount prediction according to an embodiment of the present application is provided.
The device comprises:
a task amount prediction module 501, configured to predict the total predicted task amount within a future predetermined time interval from the total historical task amount processed by the engine cluster up to the current time;
a task amount calculation module 502, configured to calculate the average task amount of each engine node in the engine cluster from the predicted total task amount;
an engine scheduling module 503, configured to adjust the number of engine nodes in the engine cluster according to the predicted total task amount and the average task amount, in response to the average task amount exceeding a preset average task amount.
In some exemplary embodiments, the task amount prediction module 501 is further configured to:
acquire the total historical task amount processed by the engine cluster up to the current moment;
and input the total historical task amount into a pre-constructed task amount prediction model to obtain the total predicted task amount within a future predetermined time interval.
In some exemplary embodiments, the engine scheduling module 503 is further configured to:
in response to the average task amount exceeding the preset average task amount, issue a task amount surge warning and increase the number of engine nodes in the engine cluster according to the predicted total task amount and the average task amount; the task amount surge warning prompts the staff that the average task amount exceeds the preset average task amount.
In some exemplary embodiments, the engine scheduling module 503 is further configured to:
in response to the average task amount not exceeding the preset average task amount, determine whether the load task amount of the engine node exceeds a preset load task amount upper limit;
and in response to the load task amount of the engine node exceeding the preset upper limit, increase the number of engine nodes in the engine cluster according to the load task amount of the engine node and the preset load task amount upper limit.
In some exemplary embodiments, the engine scheduling module 503 is further configured to:
in response to the load task amount of the engine node not exceeding the preset load task amount upper limit, determine whether the load task amount of the engine node is lower than a preset load task amount lower limit;
and in response to the load task amount of the engine node being lower than the preset lower limit, reduce the number of engine nodes in the engine cluster according to the load task amount of the engine node and the preset load task amount lower limit.
In some exemplary embodiments, the engine scheduling module 503 is further configured to:
in response to the load task amount of the engine node being lower than the preset load task amount lower limit, issue a task amount reduction warning; the task amount reduction warning prompts a worker that the load task amount of the engine node is below the preset lower limit.
In some exemplary embodiments, the task amount prediction module 501 is further configured to:
continuously monitoring the number of engine nodes in the engine cluster and the load task amount of each engine node, and continuously uploading both to a terminal device.
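The decision chain described above — scale up on a predicted surge, scale up on an over-limit node, scale down and warn on an under-limit node — can be sketched in code. This is an illustrative sketch only: the function names, the moving-average stand-in for the patent's "pre-constructed task amount prediction model", and all threshold values are assumptions, not taken from the patent.

```python
import math

# Hypothetical sketch of the scheduling decision chain. All names and
# threshold values below are illustrative assumptions.

def predict_total_tasks(historical_totals, window=3):
    """Toy stand-in for the 'pre-constructed task amount prediction model':
    forecast the next interval's task total as a moving average of the
    per-interval increments in the cumulative history."""
    increments = [b - a for a, b in zip(historical_totals, historical_totals[1:])]
    recent = increments[-window:]
    return sum(recent) / len(recent)

def adjust_cluster(predicted_total, node_count, node_loads,
                   preset_average=100.0, load_upper=120.0, load_lower=20.0):
    """Return (new_node_count, warnings) following the chain above:
    1. average task amount exceeds the preset average -> surge warning, scale up;
    2. otherwise, a node load exceeds the upper limit -> scale up;
    3. otherwise, a node load is below the lower limit -> reduction warning,
       scale down (never below one node)."""
    warnings = []
    average = predicted_total / node_count
    if average > preset_average:
        warnings.append("task amount surge")
        # add enough nodes to bring the average back under the preset limit
        node_count = math.ceil(predicted_total / preset_average)
    elif max(node_loads) > load_upper:
        # one extra node per full upper-limit's worth of excess load
        node_count += math.ceil((max(node_loads) - load_upper) / load_upper)
    elif min(node_loads) < load_lower:
        warnings.append("task amount reduction")
        node_count = max(1, node_count - 1)
    return node_count, warnings
```

For example, a predicted total of 1000 tasks across 5 nodes gives an average of 200; with a preset average of 100, the sketch issues a surge warning and grows the cluster to 10 nodes so the average falls back to the preset limit.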
For convenience of description, the above apparatus is described as being divided into functional modules. Of course, when the present application is implemented, the functions of the modules may be realized in one or more pieces of software and/or hardware.
The apparatus in the foregoing embodiment is used to implement the corresponding engine scheduling method based on task amount prediction in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Based on the same inventive concept, and corresponding to the method of any embodiment described above, the present application further provides an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the program, the engine scheduling method based on task amount prediction according to any embodiment described above is implemented.
Fig. 6 is a schematic diagram illustrating a more specific hardware structure of an electronic device according to this embodiment, where the device may include: a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. Wherein the processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 are communicatively coupled to each other within the device via a bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present disclosure.
The memory 1020 may be implemented in the form of ROM (Read-Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other application programs; when the technical solutions provided by the embodiments of the present specification are implemented by software or firmware, the relevant program code is stored in the memory 1020 and called by the processor 1010 for execution.
The input/output interface 1030 is used for connecting an input/output module to input and output information. The input/output module may be configured as a component within the device (not shown) or may be external to the device to provide corresponding functionality. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, an indicator light, etc.
The communication interface 1040 is used for connecting a communication module (not shown in the drawings) to implement communication interaction between the present apparatus and other apparatuses. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, bluetooth and the like).
Bus 1050 includes a path that transfers information between various components of the device, such as processor 1010, memory 1020, input/output interface 1030, and communication interface 1040.
It should be noted that although the above-mentioned device only shows the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040 and the bus 1050, in a specific implementation, the device may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above-described apparatus may also include only those components necessary to implement the embodiments of the present description, and not necessarily all of the components shown in the figures.
The electronic device in the foregoing embodiment is used to implement the corresponding engine scheduling method based on task amount prediction in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Based on the same inventive concept, corresponding to any of the above-mentioned embodiment methods, the present application further provides a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the engine scheduling method based on task amount prediction as described in any of the above embodiments.
The non-transitory computer-readable storage medium may be any available medium or data storage device that can be accessed by a computer, including but not limited to magnetic memory (e.g., floppy disks, hard disks, magnetic tape, magneto-optical disks (MO), etc.), optical memory (e.g., CD, DVD, BD, HVD, etc.), and semiconductor memory (e.g., ROM, EPROM, EEPROM, non-volatile memory (NAND flash), solid-state disks (SSD)), etc.
The computer instructions stored in the storage medium of the above embodiment are used to enable the computer to execute the engine scheduling method based on task amount prediction according to any embodiment of the above exemplary method, and have the beneficial effects of corresponding method embodiments, which are not described herein again.
As will be appreciated by one skilled in the art, embodiments of the present application may be embodied as a system, method, or computer program product. Thus, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining hardware and software, referred to herein generally as a "circuit," "module," or "system." Furthermore, in some embodiments, the present application may also be embodied as a computer program product on one or more computer-readable media having computer-readable program code embodied therein.
Any combination of one or more computer-readable media may be employed. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the steps depicted in the flowcharts may change order of execution. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
Use of the verb "comprise" and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The article "a" or "an" preceding an element does not exclude the presence of a plurality of such elements.
While the spirit and principles of the application have been described with reference to several particular embodiments, it is to be understood that the application is not limited to the specific embodiments disclosed, nor does the division into aspects imply that features in those aspects cannot be combined to advantage; this division is for convenience of presentation only. The application is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims, whose scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (10)

1. An engine scheduling method based on task amount prediction, characterized by comprising:
predicting a predicted total task amount in a future preset time interval according to the total amount of historical tasks processed by the engine cluster up to the current moment;
calculating the average task amount of each engine node in the engine cluster according to the predicted total task amount;
and in response to the average task amount exceeding a preset average task amount, adjusting the number of engine nodes in the engine cluster according to the predicted total task amount and the average task amount.
2. The method of claim 1, wherein predicting the predicted total task amount in the future preset time interval according to the total amount of historical tasks comprises:
acquiring the total amount of historical tasks processed by the engine cluster up to the current moment;
and inputting the total amount of historical tasks into a pre-constructed task amount prediction model to obtain the predicted total task amount in the future preset time interval.
3. The method of claim 1, wherein adjusting the number of engine nodes in the engine cluster according to the predicted total task amount and the average task amount in response to the average task amount exceeding the preset average task amount comprises:
in response to the average task amount exceeding the preset average task amount, issuing a task amount surge early warning and increasing the number of engine nodes in the engine cluster according to the predicted total task amount and the average task amount; the task amount surge early warning is used to prompt staff that the average task amount exceeds the preset average task amount.
4. The method of claim 3, further comprising:
in response to the average task amount not exceeding the preset average task amount, determining whether the load task amount of an engine node exceeds a preset load task amount upper limit;
and in response to the load task amount of the engine node exceeding the preset load task amount upper limit, increasing the number of engine nodes in the engine cluster according to the load task amount of the engine node and the preset load task amount upper limit.
5. The method of claim 4, further comprising:
in response to the load task amount of the engine node not exceeding the preset load task amount upper limit, determining whether the load task amount of the engine node is below a preset load task amount lower limit;
and in response to the load task amount of the engine node falling below the preset load task amount lower limit, reducing the number of engine nodes in the engine cluster according to the load task amount of the engine node and the preset load task amount lower limit.
6. The method of claim 5, further comprising:
in response to the load task amount of the engine node falling below the preset load task amount lower limit, issuing a task amount reduction early warning; the task amount reduction early warning is used to prompt staff that the load task amount of the engine node is below the preset load task amount lower limit.
7. The method of claim 1, further comprising:
continuously monitoring the number of engine nodes in the engine cluster and the load task amount of each engine node, and continuously uploading both to a terminal device.
8. An engine scheduling apparatus based on task amount prediction, comprising:
a task amount prediction module configured to predict a predicted total task amount in a future preset time interval according to the total amount of historical tasks processed by the engine cluster up to the current moment;
a task amount calculation module configured to calculate the average task amount of each engine node in the engine cluster according to the predicted total task amount;
and an engine scheduling module configured to, in response to the average task amount exceeding a preset average task amount, adjust the number of engine nodes in the engine cluster according to the predicted total task amount and the average task amount.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the program.
10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 7.
CN202211170133.4A 2022-09-22 2022-09-22 Engine scheduling method based on task amount prediction and related equipment Pending CN115599539A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211170133.4A CN115599539A (en) 2022-09-22 2022-09-22 Engine scheduling method based on task amount prediction and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211170133.4A CN115599539A (en) 2022-09-22 2022-09-22 Engine scheduling method based on task amount prediction and related equipment

Publications (1)

Publication Number Publication Date
CN115599539A true CN115599539A (en) 2023-01-13

Family

ID=84845156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211170133.4A Pending CN115599539A (en) 2022-09-22 2022-09-22 Engine scheduling method based on task amount prediction and related equipment

Country Status (1)

Country Link
CN (1) CN115599539A (en)

Similar Documents

Publication Publication Date Title
US11265369B2 (en) Methods and systems for intelligent distribution of workloads to multi-access edge compute nodes on a communication network
US10789102B2 (en) Resource provisioning in computing systems
CN102541460B (en) Multiple disc management method and equipment
US8615764B2 (en) Dynamic system scheduling
US9037880B2 (en) Method and system for automated application layer power management solution for serverside applications
RU2697700C2 (en) Equitable division of system resources in execution of working process
US10367719B2 (en) Optimized consumption of third-party web services in a composite service
CN111338791A (en) Method, device and equipment for scheduling cluster queue resources and storage medium
CN117852707A (en) Shaping of computational loads with virtual capacity and preferred location real-time scheduling
CN104168133A (en) Method and system for dynamic API page view configuration, and gateway
CN113228574A (en) Computing resource scheduling method, scheduler, internet of things system and computer readable medium
CN109739627B (en) Task scheduling method, electronic device and medium
CN112529309A (en) Cloud data center intelligent management system
CN109840141A (en) Thread control method, device, electronic equipment and storage medium based on cloud monitoring
CN111930516B (en) Load balancing method and related device
CN116402318B (en) Multi-stage computing power resource distribution method and device for power distribution network and network architecture
CN110347546B (en) Dynamic adjustment method, device, medium and electronic equipment for monitoring task
CN115599539A (en) Engine scheduling method based on task amount prediction and related equipment
CN110806918A (en) Virtual machine operation method and device based on deep learning neural network
CN115658287A (en) Method, apparatus, medium, and program product for scheduling execution units
US20230168940A1 (en) Time-bound task management in parallel processing environment
CN113296907B (en) Task scheduling processing method, system and computer equipment based on clusters
CN114942833A (en) Method and related device for dynamically scheduling timing task resources
CN114138444A (en) Task scheduling method, device, equipment, storage medium and program product
CN114090201A (en) Resource scheduling method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination