CN111338773B - Distributed timing task scheduling method, scheduling system and server cluster


Info

Publication number: CN111338773B
Application number: CN202010107644.6A
Authority: CN (China)
Prior art keywords: task, node, service, message queue, scheduling
Legal status: Active (granted)
Other versions: CN111338773A
Other languages: Chinese (zh)
Inventor: 李红
Original and current assignee: Huayun Data Co., Ltd.

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/54 - Interprogram communication
    • G06F 9/546 - Message passing systems or structures, e.g. queues
    • Y02D - Climate change mitigation technologies in information and communication technologies (ICT)
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention provides a distributed timing task scheduling method, a scheduling system and a server cluster. The method comprises: configuring an independent flow scheduling process, a service scheduling process and a task timer in each of at least two nodes; selecting one node as a designated node to respond to a timing task, issuing the timing task to a message queue, and establishing listening between the message queue and at least one node selected from the remaining nodes; and determining, by the flow scheduling process, according to the time limit set by the task timer and based on a retry mechanism, whether the message queue reselects a new designated node, the service scheduling process in the node selected by the message queue responding to the timing task. The invention removes the excessive dependence of timing tasks on resources during scheduling so as to achieve scheduling consistency, and overcomes the problem that a timing task cannot be responded to effectively when a node fails during scheduling.

Description

Distributed timing task scheduling method, scheduling system and server cluster
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a distributed timing task scheduling method, a scheduling system, and a server cluster.
Background
One or more projects run in a cloud platform or server cluster environment, and these projects need to configure timing tasks (Tasks). In a large-scale cloud platform or server cluster there are typically many projects, and a single project contains many tasks, such as judging the timeout state of an order system, updating cached data on a schedule, sending mail to users at fixed times, and even generating periodically calculated reports. To ensure high availability and high fault tolerance, large-scale cloud platforms and server clusters typically employ a distributed architecture, and distributed timing tasks have evolved accordingly.
Currently, the mainstream technical route for distributed timing task scheduling is based on the Quartz framework. To achieve distributed task scheduling, Quartz stores the tasks of multiple nodes in a database. When a task is due, Quartz retrieves the trigger from the database and executes the task; if triggers share the same name and execution time, the task is executed from only one node. If a node fails to execute the task, the task is distributed to another node for execution, thereby realizing distributed task scheduling. Meanwhile, to solve the high-availability problem of cloud platforms and server clusters, the prior art may adopt Zookeeper to implement data sharding, so that data is not processed repeatedly and the data processing speed is improved. This plays a significant role in the financial industry, mobile payment, and similar fields.
However, the Zookeeper route places great pressure on the database, which is then at risk of downtime; the Quartz route depends strongly on the database, and database deployment is complex. In particular, when the Quartz framework is used to realize distributed timing task scheduling, resource occupation is too heavy, uniqueness during task scheduling cannot be guaranteed, and the distributed-database techniques of the prior art impose computational overhead and pressure on the distributed database.
In view of the foregoing, the distributed timing task scheduling methods of the prior art and similar technical solutions need to be improved to solve the above problems.
Disclosure of Invention
The invention aims to disclose a distributed timing task scheduling method, a scheduling system and a server cluster that overcome the defects in the prior art, in particular the excessive dependence of timing tasks on resources during scheduling; it also aims to improve the fault tolerance of the nodes running timing tasks and to reduce the difficulty of timing task scheduling, so as to achieve consistency of task scheduling.
To achieve the first object, the present invention provides a distributed timing task scheduling method, including:
an independent flow scheduling process, a service scheduling process and a task timer are respectively configured in at least two nodes;
selecting one node as a designated node to respond to the timing task, issuing the timing task to a message queue, and establishing listening between the message queue and at least one node selected from the remaining nodes;
the flow scheduling process determines, according to the time limit set by the task timer and based on a retry mechanism, whether a new designated node is reselected by the message queue, and the service scheduling process in the node selected by the message queue responds to the timing task.
As a further improvement of the invention, after the timing task is issued to the message queue, listening is established between all nodes and the message queue, and the flow scheduling process determines, according to the time limit set by the task timer and based on a polling mechanism, whether a new designated node is reselected by the message queue, so that the service scheduling process contained in the new designated node responds to the timing task.
As a further improvement of the invention, when a timing task cannot be responded to by the service scheduling process of the designated node, the flow scheduling process determines, based on the distributed lock corresponding to the timing task and according to the time limit set by the task timer, whether a new designated node is reselected from the remaining nodes by a polling mechanism in the message queue, so that the service scheduling process contained in the new designated node responds to the timing task.
As a further improvement of the present invention, before the independent flow scheduling process, service scheduling process and task timer are respectively configured in the at least two nodes, the method further includes: detecting the response capability of all nodes to the timing task, and storing the detection results in the message queue.
As a further improvement of the invention, the node configures a pair of flow scheduling process and service scheduling process, and decouples the flow scheduling process and the service scheduling process by using a message queue;
the message queue is RabbitMQ.
As a further improvement of the invention, the node configures a plurality of flow scheduling processes and service scheduling processes, and decouples the flow scheduling processes and the service scheduling processes by using message queues;
the message queue is RabbitMQ.
As a further improvement of the invention, among the plurality of pairs of flow scheduling processes and service scheduling processes configured in the node, the number of flow scheduling processes is greater than the number of service scheduling processes; the service scheduling process is configured as one or more service units for responding to timing tasks, and a mapping relationship exists between the service scheduling process and the service units;
the service unit is a container, a virtual machine or a micro service;
The distributed timing task scheduling method is used for a multi-timing task scene.
As a further improvement of the invention, further comprising:
after the service scheduling process in the selected node responds to the timing task, a result corresponding to the response to the timing task is stored in the storage device, and the designated node is notified by the message queue.
Based on the same thought, the invention also discloses a distributed timing task scheduling system, which comprises:
the system comprises a task scheduler, a task timer, a service scheduler, a message queue and a timing task issuing component which are deployed in the node;
the timing task issuing component selects one node as a designated node to respond to the timing task, issues the timing task to the message queue, and establishes listening between the message queue and at least one node selected from the remaining nodes; the task scheduler comprises a flow scheduling process which determines, according to the time limit set by the task timer and based on a retry mechanism, whether a new designated node is reselected by the message queue, and the service scheduler comprising a service scheduling process in the node selected by the message queue responds to the timing task.
As a further improvement of the present invention,
a task scheduler determines, according to the time limit set by the task timer and based on a polling mechanism, whether a new designated node is reselected by the message queue, so that the service scheduler containing a service scheduling process in the node selected by the message queue responds to the timing task.
As a further improvement of the present invention,
when the timing task cannot be responded to by the service scheduler of the designated node, the flow scheduling process contained in the task scheduler determines, based on the distributed lock corresponding to the timing task and according to the time limit set by the task timer, whether a new designated node is reselected from the remaining nodes by a polling mechanism in the message queue, so that the new designated node responds to the timing task.
As a further improvement of the present invention,
before the independent flow scheduling process, the service scheduling process and the task timer are respectively configured in the at least two nodes, the method further comprises: detecting the response capability of all nodes to the timing task, and storing the detection results in the message queue.
As a further improvement of the present invention, the task scheduler and the service scheduler configured by the node include a pair of a flow scheduling process and a service scheduling process, and the flow scheduling process and the service scheduling process are decoupled by using a message queue;
the message queue is RabbitMQ.
As a further improvement of the present invention, the task scheduler and the service scheduler configured in the node include a plurality of pairs of flow scheduling processes and service scheduling processes, and the plurality of pairs of flow scheduling processes and service scheduling processes are decoupled by means of the message queue;
the message queue is RabbitMQ.
As a further improvement of the invention, the number of the flow scheduling processes is larger than the number of the service scheduling processes, the service scheduling processes are configured into one or more service units for responding to the timing tasks, and the service scheduling processes and the service units have a mapping relation;
the service unit is a container, a virtual machine or a micro service;
the distributed timed task scheduling system is configured to respond to a multi-timed task scenario.
As a further improvement of the present invention, there is also included:
the storage device, configured to store the result corresponding to the response to the timing task after the service scheduling process in the selected node has responded to the timing task, the designated node being notified by the message queue; the storage device is selected from JVM memory, a distributed storage component, or a database.
Finally, the invention also discloses a server cluster,
the server cluster is configured with at least two nodes,
the server cluster operates the distributed timing task scheduling method according to the first aspect of the invention.
Compared with the prior art, the invention has the beneficial effects that:
The distributed timing task scheduling method, scheduling system and server cluster of the invention effectively remove the excessive dependence of timing tasks on resources during scheduling, improve the fault tolerance of nodes running timing tasks, and reduce the difficulty of scheduling when the server cluster responds to multiple timing tasks, thereby achieving consistency of task scheduling; in particular, they overcome the problem that a timing task cannot be responded to effectively when a node fails during scheduling, and provide horizontal scaling capability for timing task scheduling. Meanwhile, the invention enables a computing device or cloud computing platform configured with the distributed timing task scheduling system to use physical and/or virtual resources more reasonably while responding to timing tasks, effectively preventing those resources from being wasted or left idle.
Drawings
FIG. 1 is an overall flow chart of a distributed timing task scheduling method of the present invention;
FIG. 2 is a topology diagram of a distributed timed task scheduling system according to the present invention, and illustrates a business architecture of running the distributed timed task scheduling method illustrated in FIG. 1 based on the distributed timed task scheduling system;
FIG. 3 is a business architecture of the distributed timing task scheduling system shown in FIG. 2 in a first variation;
FIG. 4 is a business architecture of the distributed timing task scheduling system shown in FIG. 2 in a second variation;
FIG. 5 is a business architecture of the distributed timing task scheduling system shown in FIG. 2 in a third variation;
FIG. 6 is a schematic diagram of a server cluster issuing a timing task to a designated node and selecting the designated node based on an external execution subject, the external execution subject being a user or administrator;
FIG. 7 is a detailed flow chart of a distributed timed task scheduling method of the present invention;
FIG. 8 is a business architecture diagram in which the message queue responds to multiple timing tasks issued by a designated node;

FIG. 9 is a business architecture diagram of the timed task issuing component;

FIG. 10 is a topology diagram of a computer-readable medium.
Detailed Description
The present invention will be described in detail below with reference to the embodiments shown in the drawings, but it should be understood that the present invention is not limited to these embodiments, and functional, methodological, or structural equivalents and alternatives made by those skilled in the art according to these embodiments fall within the scope of protection of the present invention.
Before explaining the various embodiments of the invention in detail, technical terms and meanings referred to in the specification are summarized as follows.
The term "logic" includes any physical and tangible functionality for performing a task. For example, each operation shown in the flowchart corresponds to a logic component for performing that operation. Operations may be performed using, for example, software running on a computer device, hardware (e.g., chip-implemented logic functions), etc., and/or any combination thereof. When implemented by a computing device, a logic component represents an electrical component that is a physical part of the computer system, however implemented.
The terms "task" and "timing task" (or "Task") have equivalent meanings in this application and may be replaced with "Job" in actual code. The scheduling of timing tasks can be expressed in business scenarios such as "the payment system runs a batch at half past one every morning, performs daily clearing every day, and performs monthly clearing on the 1st", or "after goods are shipped successfully, remind the customer of the shipping and logistics information by SMS or mail", or "forcibly reclaim cloud hosts allocated to users according to the lease time limit".
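Purely for illustration, such a Job might be represented as a small data structure carrying its schedule and business action. The structure, field names and cron string below are assumptions of this sketch, not definitions taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    """Hypothetical representation of a timing task ("Job")."""
    job_id: str   # unique identifier of the timing task
    cron: str     # schedule, e.g. "30 1 * * *" = 01:30 every morning
    action: str   # business operation to perform
    payload: dict = field(default_factory=dict)  # parameters for the service scheduling process

# The "payment system daily clearing" example from the text:
daily_clearing = Job(
    job_id="pay-daily-clearing",
    cron="30 1 * * *",
    action="daily_clearing",
    payload={"system": "payment"},
)
```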
The terms "designated node" and "new designated node" refer to the node or computing node that responds to a timing task at different points in time during scheduling. As shown in fig. 2, if Node-1 is selected as the responding node when the timing task is first issued, Node-1 is the designated node; when Node-1 cannot execute the scheduling of the timing task for special reasons such as downtime or response timeout, the scheduling of the timing task is migrated to Node-2 and/or Node-3 through the message queue 100, and Node-2 and/or Node-3 are then understood as the "new designated node" referred to in this application.
The terms "node" and "computing node" are technically equivalent in this application.
The phrases "is configured as" and "is configured to" include any manner in which any kind of physical and tangible functionality may be constructed to perform the identified operation. The functionality may be configured to perform an operation using, for example, software running on a computer device, hardware (e.g., chip-implemented logic functions), etc., and/or any combination thereof.
Embodiment one:
An embodiment of a distributed timing task scheduling method (hereinafter "scheduling method") according to the present invention is disclosed with reference to figs. 1, 2, and 6 to 8. The scheduling method, together with the distributed timing task scheduling system (hereinafter "scheduling system") disclosed in a later embodiment, can run in a computing device or system able to respond to timing tasks, such as the server cluster 200; the computing device or system can further be configured as a data center, a cloud platform, etc. This application takes the server cluster 200 as the typical example in describing these embodiments.
Referring to fig. 1, the distributed timing task scheduling method includes the following steps S1 to S3. The distributed timing task scheduling method is used for a multi-timing task scene.
Firstly, step S1 is executed, and independent flow scheduling processes, service scheduling processes and task timers are respectively configured in at least two nodes.
Referring to fig. 2 and 7, Node-1 to Node-3 are configured in the server cluster 200: Node-1 configures an independent flow scheduling process 821, a service scheduling process 921, and a task timer 811 (Timer); Node-2 configures an independent flow scheduling process 822, a service scheduling process 922, and a task timer 812; Node-3 configures an independent flow scheduling process 823, a service scheduling process 923, and a task timer 813. The flow scheduling process 821 logically runs in the task scheduler 801 (Task-scheduler) and the service scheduling process 921 logically runs in the service scheduler 901; the task scheduler 801 in Node-1 may run one or more flow scheduling processes 821, the service scheduler 901 in Node-1 may run one or more service scheduling processes 921, and Node-2 and Node-3 may be configured with reference to Node-1. It should be noted that after any one of the three computing nodes Node-1 to Node-3 receives an issued timing task, the timing task is issued to the message queue 100 (a RabbitMQ cluster), and the other computing nodes synchronously receive the task message stored in the message queue 100. Meanwhile, the period set by the task timer in each computing node (for example, when a certain timing task starts to be executed and by when execution must be completed) also corresponds to the timing task; if a timing task is not executed within the period set by the task timer, the computing node configuring the service scheduler (Service-scheduler) is determined to be unable to respond to the timing task, and the timing task is then transferred through the message queue 100 to a new designated node such as Node-2 and/or Node-3 to complete the response. In this embodiment, "responding to a timing task" may be embodied as specific operations such as viewing a task, deleting a task, viewing the historical execution records of a task, executing a task, and modifying task content. The task timer counts down the set period, creates a timing task when the period ends, and issues the timing task to the message queue 100 as shown by arrows task1 to task3 in fig. 2.
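As a minimal sketch of this per-node configuration, the snippet below models a task timer that counts down its period and then issues the timing task to the RabbitMQ message queue. The broker URL, queue name and the use of Python's pika and threading modules are assumptions of this sketch; the patent itself only fixes RabbitMQ as the message queue:

```python
import json
import threading

import pika  # RabbitMQ client library (assumed; the patent only names RabbitMQ)

RABBIT_URL = "amqp://guest:guest@localhost:5672/%2F"  # assumed broker address
TASK_QUEUE = "timed.tasks"                            # assumed queue name

class TaskTimer:
    """Counts down the period set for a timing task, then issues the task
    to the message queue 100 (arrows task1..task3 in Fig. 2)."""

    def __init__(self, period_s: float, task: dict):
        self.period_s = period_s
        self.task = task

    def start(self) -> None:
        threading.Timer(self.period_s, self._fire).start()

    def _fire(self) -> None:
        conn = pika.BlockingConnection(pika.URLParameters(RABBIT_URL))
        ch = conn.channel()
        ch.queue_declare(queue=TASK_QUEUE, durable=True)  # disk-backed queue
        ch.basic_publish(exchange="", routing_key=TASK_QUEUE,
                         body=json.dumps(self.task).encode())
        conn.close()

# Each node (Node-1 to Node-3) starts its own independent timer for the task,
# so the task survives as long as at least one node stays up.
TaskTimer(period_s=60.0, task={"job_id": "pay-daily-clearing"}).start()
```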
Independent flow scheduling processes, service scheduling processes and task timers are respectively configured in at least two nodes, so that the execution of timing tasks by other computing nodes can be ensured when Node-1 is down, and high reliability and fault tolerance in the timing task scheduling process are ensured.
Then, step S2 is executed: one node is selected as the designated node to respond to the timing task, the timing task is issued to the message queue, and listening is established between the message queue and at least one node selected from the remaining nodes. For example, if Node-1 is the designated node, Node-2 and Node-3 are backup nodes for when Node-1 cannot respond to the timing task, so that one or more of these computing nodes can serve as the new designated node.
In the process of issuing a timing task to the message queue 100, listening is established between all nodes and the message queue 100, and the flow scheduling processes 821 to 823 determine, according to the time limits set by the task timers 811 to 813 and based on a polling mechanism, whether a new designated node is reselected by the message queue 100, so that the service scheduling processes 921 to 923 contained in the new designated node respond to the timing task. In this embodiment, the task scheduler 801 and the service scheduler (Service-scheduler) 901 configured in the same node (e.g., Node-1) do not form a one-to-one logical relationship; a timing task may equally be responded to by the service scheduler 902 in Node-2 or the service scheduler 903 in Node-3. The timing tasks issued by task scheduler 801 to message queue 100 when Node-1 goes down may be responded to by service scheduler 902 comprising one or more service scheduling processes 922 in Node-2 (or, of course, by service scheduler 903 configured in Node-3). "Remaining nodes" means the other nodes (e.g., Node-2, Node-3) configured in the server cluster 200 when Node-1 is the designated node, and the new designated node is one or more nodes selected from the remaining nodes.
Referring to fig. 2, when a timing task cannot be responded to by the service scheduling process 921 of the designated node, then, based on the distributed lock corresponding to the timing task, the flow scheduling process 821 determines, according to the time limit set by the task timer 811, whether a new designated node is reselected from the remaining nodes (i.e., Node-2 and Node-3) by a polling mechanism in the message queue 100, so that the service scheduling process contained in the new designated node (i.e., service scheduling process 922 and/or service scheduling process 923) responds to the timing task.
Node-1 issues the timing task to the message queue 100 as shown by arrow task1, Node-2 as shown by arrow task2, and Node-3 as shown by arrow task3. At the same time, listening is established between all nodes and the message queue 100. When it is determined that the timing task is to be executed by the service scheduling process 921 contained in the service scheduler 901 of Node-1, the message queue 100 issues the timing task to the service scheduler 901; the result is returned to the message queue 100 after the timing task is executed, and the task scheduler 801 is finally notified of the result through the message queue 100, so that the execution result generated by the service scheduling processes 921 to 923 is saved by the task scheduler 801 into the storage device 201 to be called or accessed by the user or administrator. A complete timing task scheduling operation is thus performed.
Specifically, in this embodiment, before the independent flow scheduling processes 821 to 823, service scheduling processes 921 to 923 and task timers 811 to 813 are respectively configured in at least two nodes, the method further includes: detecting the response capability of all nodes to the timing task, and storing the detection results in the message queue 100. The scheduling method further includes: after the service scheduling process in the selected node responds to the timing task, storing the result corresponding to the response in the storage device 201 and notifying the designated node through the message queue. Preferably, the storage device 201 is selected from JVM memory, a distributed storage component, or a database, most preferably a distributed database, to improve CRUD operation efficiency. The distributed storage component may also be block storage or file storage: when the timing task involves searching, inserting, modifying or deleting a certain object, file storage is preferred; when the timing task involves accessing or downloading streaming media files such as videos, block storage is preferred.
In this embodiment, the applicant takes Node-1 as the designated node for responding to the timing task. Node-1 configures a pair consisting of flow scheduling process 821 and service scheduling process 921, and decouples flow scheduling process 821 and service scheduling process 921 by means of the message queue 100. The message queue is RabbitMQ. Node-2 and Node-3 are configured with reference to Node-1.
Specifically, as described in connection with fig. 8, RabbitMQ builds its clustering on the distributed characteristics of Erlang (the lower layer of RabbitMQ is implemented on the Erlang runtime: rabbitmqctl starts an Erlang node, and the Erlang system connects to the RabbitMQ node from that Erlang node; the connection requires the correct Erlang cookie and node name, and Erlang nodes authenticate each other by exchanging the Erlang cookie). Therefore, when deploying a RabbitMQ distributed cluster, Erlang is installed first, and the cookie of one service is copied to the other nodes.
In the RabbitMQ cluster, each RabbitMQ instance is a peer node, i.e., each node provides client connections to receive and send messages. Nodes are divided into memory nodes and disk nodes; generally, all nodes are established as disk nodes so that messages do not disappear after a machine restart. Exchange 601 is the key component that accepts producer messages and routes messages to the message queue 100. The Exchange type and the Binding determine the routing rules of a message. So before a producer can send a message, it must first declare an Exchange 601 and a Binding 602 corresponding to that Exchange 601. This can be done through an exchange declare and a binding declare. In RabbitMQ, declaring an Exchange 601 requires three parameters: ExchangeName, ExchangeType and Durable. ExchangeName is the name of the Exchange; this property needs to be specified when creating a Binding, and the producer pushes messages to it when publishing. ExchangeType refers to the type of the Exchange; RabbitMQ provides several Exchange types, including the Direct type, the Fanout type and the Topic type, and different types exhibit different routing behavior. Durable is the persistence attribute of the Exchange 601. Declaring a Binding requires providing a QueueName, an ExchangeName and a BindingKey. The routing rules exhibited by the different Exchange types are set forth below.
When the producer sends a message, a RoutingKey and an Exchange must be specified. After receiving the message, the Exchange judges its own type:
a) If it is of the Direct type, the RoutingKey in the message is compared with the BindingKeys of all Bindings associated with the Exchange, and if they are equal, the message is sent to the Queue corresponding to that Binding.
b) If it is of the Fanout type, the message is sent to all Queues that have declared a Binding with the Exchange; this is a broadcast behavior.
c) If it is of the Topic type, the RoutingKey and the BindingKey are matched against each other (wildcard-style pattern matching), and if the match succeeds, the message is sent to the corresponding Queue.
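To make the ExchangeName / ExchangeType / Durable and QueueName / ExchangeName / BindingKey parameters concrete, here is a hedged sketch using the Python pika client; all of the names ("task.exchange", "task.queue.q1", "timed.task") are invented for illustration and do not come from the patent:

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

# Declare an exchange: the three parameters named in the text.
ch.exchange_declare(exchange="task.exchange",  # ExchangeName
                    exchange_type="direct",    # ExchangeType: direct / fanout / topic
                    durable=True)              # Durable: persist the exchange

# Declare a queue and bind it: QueueName + ExchangeName + BindingKey.
ch.queue_declare(queue="task.queue.q1", durable=True)
ch.queue_bind(queue="task.queue.q1",
              exchange="task.exchange",
              routing_key="timed.task")        # BindingKey

# A producer publishes with a RoutingKey; the direct exchange compares it
# with the BindingKey and routes the message to task.queue.q1 on a match.
ch.basic_publish(exchange="task.exchange",
                 routing_key="timed.task",
                 body=b"job:pay-daily-clearing")
conn.close()
```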
The RabbitMQ cluster sends messages to each consumer in sequence, so that each consumer receives an equal number of messages on average. This way of sending messages is called round-robin. Referring to fig. 8, after the timing task is issued to Exchange 601, a plurality of queues Q1 to Qn are formed based on the binding process; the queues Q1 to Qn form Queues 603, and their messages are issued one by one to a service scheduling process and executed.
Step S3, the flow scheduling process determines whether to reselect a new designated node by the message queue based on a retry mechanism according to the time limit set by the task timer, and the service scheduling process in the selected node of the message queue 100 responds to the timed task.
In this embodiment, the polling distribution mechanism based on the message queue 100 provides load-balancing capability for the task messages corresponding to timing tasks. As shown in fig. 7, solid arrow 5 and dashed arrow 5 represent the service scheduling processes in the three nodes listening for timing task messages in the message queue 100. Because the service scheduling processes 921, 922 and 923 configured in the three nodes listen for timing task messages of the same exchange type, the timing task can be distributed in sequence, through the load-balancing strategy of the message queue 100, to the selected service scheduling process 921. After the service scheduling process 921 finishes executing the timing task, it sends a confirmation receipt to the message queue 100 in the direction shown by arrow 6 in fig. 7, and after receiving the confirmation receipt the message queue 100 confirms that the timing task has been executed. If the service scheduling process 921 in Node-1 does not send an acknowledgement to the message queue 100, then after a set period of time (e.g., 0.5 seconds) the message queue 100 issues the timing task that had been sent to service scheduling process 921 to the service scheduling process 922 in Node-2, and so on until the message queue 100 receives an acknowledgement. Because the service scheduling processes 921 to 923 and the flow scheduling processes 821 to 823 in the nodes are decoupled through the message queue 100, the uniqueness of the timing task scheduling process can be realized through the message queue 100, solving the HA and load-balancing problems in distributed timing task scheduling.
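The confirmation receipt described above corresponds to manual message acknowledgements on the consumer side. The sketch below shows a service scheduling process consuming with manual acks, reusing the assumed queue name from the previous sketch; execute_timed_task is a stand-in for the real business execution. Note that stock RabbitMQ redelivers when a consumer negatively acknowledges or its connection is lost, rather than on a fixed timer, so the 0.5-second delay mentioned in the text would be an application-level policy:

```python
import pika

def execute_timed_task(body: bytes) -> None:
    """Stand-in for the business execution done by the service scheduling process."""
    print("executing timing task:", body)

def on_task(channel, method, properties, body):
    try:
        execute_timed_task(body)
        channel.basic_ack(delivery_tag=method.delivery_tag)  # confirmation receipt (arrow 6)
    except Exception:
        # Negative ack: the broker re-queues the task and delivers it to
        # another service scheduling process, e.g. process 922 on Node-2.
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="task.queue.q1", durable=True)
ch.basic_qos(prefetch_count=1)  # one unacknowledged task per consumer: fair round-robin
ch.basic_consume(queue="task.queue.q1", on_message_callback=on_task)
ch.start_consuming()
```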
In particular, in cloud platform instances with multiple computing nodes, computing nodes of different scales can be connected through the message queue 100, which improves the logical stability and reliability of scaling the computing nodes out or in, and matches each timing task with the service scheduling process in the most suitable node, thereby simplifying capacity expansion of the computing nodes of a cloud computing platform that applies this distributed timing task scheduling method.
In theory, the scheduling method disclosed in this basic embodiment can serve the timing task requirements of a cloud platform with anywhere from two to any number of computing nodes. Meanwhile, the scheduling method disclosed in this embodiment does not need to rely on the traditional Quartz + Zookeeper framework, which solves the technical problem of excessive resource occupation during distributed timing task scheduling, reduces the computational overhead and pressure on the database (a subordinate concept of the storage device 201), and reduces the difficulty of database deployment.
In connection with FIG. 7, applicants more particularly illustrate a complete flow of the scheduling method.
The scheduling method may be further refined into execution steps (i.e., steps 1-7) as shown by arrows 1-7, wherein solid arrows represent actual execution flows and dashed arrows represent alternative execution flows. As shown in fig. 7, it represents an example in which the scheduling process for a certain timing task depends on the flow scheduling process 823 in Node-3, and when Node-3 is down, the timing task is transferred to be executed by the service scheduling process 921 in Node-1 through the message queue 100.
Arrow 1 (step 1): the user designs a timing task flow chart through the page designer and submits it to the program running the scheduling method disclosed in embodiment one. The program parses the timing task steps and sends them to a flow scheduling node (e.g., Node-3) to create a timing task. After the timing task is created successfully, the timing task record is written back to the storage device 201; meanwhile, a task timer is started in Node-3 where the flow scheduling resides. The timer of each timing task is independent of the others, and at this point only Node-3 has a record of the timing task.
Arrow 2 (step 2): at this time, if Node-3 goes down, the timing task just created on it would be lost. To avoid such a single point of failure, Node-3 issues the just-created task to the message queue 100, which the other flow scheduling nodes (i.e., Node-1 and Node-2) are listening to.
Arrow 3 (step 3): when a new task is issued to the message queue 100, Node-1 and Node-2 immediately hear it, acquire the new timing task, and create a timer on their own node as the task timer corresponding to the timing task. In this way, as long as at least one node of the server cluster 200 is not down, the timing task is not lost.
Arrow 4 (step 4): when the timer fires, the flow scheduling processes on all nodes trigger the creation of the timing task. To ensure that the same timing task is not executed repeatedly, each node tries, at issuing time, to create a distributed lock for that timing task. The distributed lock of a given timing task can be created successfully only once, so only one node can successfully issue the timing task to the message queue 100; the nodes whose lock creation fails cancel issuing the timing task and wait for the next round, when the timer triggers the task again and the nodes again race to create the distributed lock. A node issues the timing task when it creates the lock successfully; otherwise the node cancels issuing it.
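The patent does not specify how the distributed lock is implemented; a common choice, shown here purely as an assumption, is a Redis SET NX key with an expiry, so that exactly one node wins the right to issue the task in each trigger round:

```python
import redis  # assumed lock backend; the patent leaves the implementation open

r = redis.Redis(host="localhost", port=6379)

def try_issue(task_id: str, round_no: int, issue) -> bool:
    """Race for the distributed lock of one timing task in one trigger round;
    only the winning node issues the task to the message queue."""
    lock_key = f"lock:timed-task:{task_id}:{round_no}"
    # Expiry so a crashed winner cannot wedge the lock forever.
    won = r.set(lock_key, "owner", nx=True, ex=30)
    if won:
        issue(task_id)   # issue the timing task to the message queue 100
        return True
    return False         # lock exists: another node won; wait for the next round
```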
Arrow 5 (step 5): the service scheduling processes listen for timing tasks in the message queue 100; when the timing task is issued to the message queue 100 in step 4, a service scheduling process receives it and executes it. Because multiple service scheduling processes listen on the same Topic, the message queue 100 adopts a load-balancing strategy, such as round-robin scheduling, to distribute the timing tasks to the service scheduling processes in turn. After a service scheduling process finishes executing, it distributes the result to the message queue 100 and returns a message-consumption confirmation receipt; when the confirmation receipt is received, the message is considered consumed. If the node goes down during execution of the timing task, the message queue 100, not having received the consumption receipt after a certain time, issues the task to another service scheduling process for execution, which ensures that the timing task is consumed by exactly one service scheduling process among Node-1 to Node-3.
Arrow 6 (step 6): the result of the execution of step 5 (i.e., the acknowledgement receipt described above) is distributed to the message queue 100.
Arrow 7 (step 7): the flow scheduling processes listen for the service execution result in the message queue 100. When the execution result is distributed to the message queue 100, one flow scheduling process receives it and writes the result back into the storage device 201 for storage. If the timing task has a subsequent associated operation branch, one branch is selected according to the result and the timing task continues to be issued, executing steps 1 to 7 again; if there is no subsequent associated timing task, the scheduling ends once the execution result is received in step 7.
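As a sketch of step 7 under the same assumptions as the earlier snippets, a flow scheduling process might listen on a result queue and write each execution result back to the storage device 201; the queue name "task.results" and the save_result helper are hypothetical:

```python
import json

import pika

def save_result(result: dict) -> None:
    # Stand-in for writing the execution result back to storage device 201.
    print("write back to storage device 201:", result)

def on_result(channel, method, properties, body):
    result = json.loads(body)
    save_result(result)
    channel.basic_ack(delivery_tag=method.delivery_tag)
    # If the timing task has a follow-up branch, a new timing task would be
    # issued here according to `result`, repeating steps 1 to 7.

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="task.results", durable=True)
ch.basic_consume(queue="task.results", on_message_callback=on_result)
ch.start_consuming()
```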
Embodiment two:
the embodiment discloses a first specific implementation manner of a distributed timing task scheduling method. Compared with the scheduling method disclosed in the first embodiment, the main difference of the embodiment is that in the embodiment, one node configures a plurality of pairs of flow scheduling processes and service scheduling processes, and decouples the flow scheduling processes and the service scheduling processes by using message queues; message queue 100 is RabbitMQ.
Referring to fig. 4, a service scheduler 902 and a service scheduler 912 are configured in Node-2, and each is configured with one or more service scheduling processes. Referring to FIG. 5, a task scheduler 802 and a task scheduler 812 are configured in Node-2, and each is configured with one or more flow scheduling processes. As shown in fig. 4 and fig. 5, the service schedulers configured by the same node and the task schedulers are all decoupled by the message queue 100, and when the service scheduling process configured in a certain service scheduler located in the same node cannot send a confirmation receipt to the message queue 100, the matched service scheduling process can be directly selected in the same node in a polling manner, so that the calculation overhead of the message queue 100 and the scheduling pressure on the timing task can be reduced to a certain extent.
Of course, in this embodiment, not every node needs to configure a task scheduler, nor does every node need to configure a service scheduler. As shown in fig. 3, in one example, Node-1 to Node-3 each configure an independent task scheduler 801 to 803, while the service scheduler 901 is configured only in Node-1. As shown in fig. 4, in another example, Node-1 to Node-3 each have an independent task scheduler 801 to 803; Node-1 has a service scheduler 901, Node-2 has a service scheduler 902 and a service scheduler 912, and Node-3 has no service scheduler. As shown in fig. 5, in yet another example, Node-1 to Node-3 each configure independent service schedulers 901 to 903, while Node-1 configures task scheduler 801, Node-2 configures task scheduler 802 and task scheduler 812, and Node-3 configures no task scheduler. The task schedulers and service schedulers in the foregoing examples are decoupled through the message queue 100, so the timing tasks can be adjusted adaptively to better fit actual service requirements, with a peak-shaving and valley-filling effect.
Preferably, in this embodiment, among the plurality of pairs of flow scheduling processes and service scheduling processes configured in the node or computing node, the number of flow scheduling processes is greater than the number of service scheduling processes; the service scheduling process is configured as one or more service units for responding to timing tasks, and a mapping relationship exists between the service scheduling process and the service units. A service unit is a container, a virtual machine or a microservice. Because service units temporarily occupy resources/services in the node or computing node, the resources/services required by the service scheduling process corresponding to the response to the timing task are differentiated. Through the above arrangement, the service scheduling process that actually responds to the timing task calls resources more precisely, reducing the waste of computing, storage and network resources in the server cluster 200 or the cloud platform.
Embodiment III:
referring to fig. 6 and 9, this embodiment discloses a specific implementation of a distributed timing task scheduling system (hereinafter referred to as "scheduling system"). The scheduling system applies the inventive idea of the scheduling method as disclosed in the above-described first and/or second embodiments. In this embodiment, the deployment of the scheduling system in the server cluster 200 is described in detail.
The distributed timing task scheduling system comprises:
task schedulers 801 to 803, task timers 811 to 813 and service schedulers 901 to 903 deployed in nodes, a message queue 100, and a timed task issuing component 500.
The timing task issuing component 500 selects one node as the designated node to respond to the timing task, issues the timing task to the message queue 100, and establishes listening between the message queue and at least one node selected from the remaining nodes. The task scheduler includes a flow scheduling process that determines, according to the time limit set by the task timer and based on a retry mechanism, whether a new designated node is reselected by the message queue, and the service scheduler including a service scheduling process in the node selected by the message queue 100 responds to the timing task.
Of course, the precondition for "establishing listening between the message queue and at least one node selected from the remaining nodes" is that the designated node initially selected by the timing task issuing component 500, and its configured service scheduler 901 containing the service scheduling process 921, are healthy enough to respond to the timing task delivered through the message queue 100. The timing task issuing component 500 includes a user interface (UI) 501 and a load balancer (LB) 502. The user interface 501 provides a visual instruction-input interface, so that a user or administrator can enter timing task creation instructions into the scheduling system through the user interface 501 and visually view timing task execution results through it.
As shown in fig. 2 and fig. 7, Node-1 to Node-3 are configured in the server cluster 200: Node-1 configures an independent flow scheduling process 821, a service scheduling process 921, and a task timer 811 (Timer); Node-2 configures an independent flow scheduling process 822, a service scheduling process 922, and a task timer 812; Node-3 configures an independent flow scheduling process 823, a service scheduling process 923, and a task timer 813. The flow scheduling process 821 logically runs in the task scheduler 801 (Task-scheduler) and the service scheduling process 921 logically runs in the service scheduler 901; the task scheduler 801 in Node-1 may run one or more flow scheduling processes 821, the service scheduler 901 in Node-1 may run one or more service scheduling processes 921, and Node-2 and Node-3 may be configured with reference to Node-1. After any one of the three computing nodes Node-1 to Node-3 receives an issued timing task, the timing task is issued to the message queue 100 (a RabbitMQ cluster), and the other computing nodes synchronously receive the task message stored in the message queue 100. Meanwhile, the period set by the task timer in each computing node (for example, when a certain timing task starts to be executed and by when execution must be completed) also corresponds to the timing task; if a timing task is not executed within the period set by the task timer, the computing node configured with the service scheduler is determined to be unable to respond to the timing task, and the timing task is then transferred through the message queue 100 to a new designated node such as Node-2 and/or Node-3 to complete the response. In this embodiment, "responding to a timing task" may be embodied as specific operations such as viewing a task, deleting a task, viewing the historical execution records of a task, executing a task, and modifying task content. The task timer counts down the set period, creates a timing task when the period ends, and issues the timing task to the message queue 100 as shown by arrows task1 to task3 in fig. 2.
The scheduling system disclosed in this embodiment shares the same inventive concept as the scheduling method disclosed in embodiment one and/or embodiment two, merely seeking protection from the apparatus side rather than the method side; the parts of its technical solution that are identical to embodiment one and/or embodiment two are described there and are not repeated here.
Embodiment four:
This embodiment discloses a server cluster 200 composed of one or more computers having independent computing, storage and communication capabilities. The server cluster 200 configures at least two nodes. In this embodiment, the nodes are computing nodes (Compute Nodes), i.e., Node-1 to Node-3, and the number is not limited to these three. The server cluster 200 runs a distributed timing task scheduling method as disclosed in embodiment one and/or embodiment two.
A node or computing node is a functional logical partition in the cloud platform, which may further be understood as a computer, or a computer that includes the computer-readable medium 700 disclosed in fig. 10; it may be a physical computer or a virtual computer. The computer-readable medium 700 includes at least one processor 702 and at least one memory device coupled to the processor 702, the memory device storing computer program instructions 701 which, when executed, perform a distributed timing task scheduling method as disclosed in embodiment one and/or embodiment two. For the specific technical solutions of that method, please refer to embodiments one and/or two, which are not repeated here. Meanwhile, the memory device disclosed in this embodiment is not the same concept as the storage device 201 in embodiment one, but generally includes the storage device 201; the memory device of this embodiment is generally configured as a mass storage medium, such as a mechanical hard disk or a disk array, for storing data.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated unit may be stored in a computer-readable medium if implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, or an optical disk.
The above list of detailed descriptions is only specific to practical embodiments of the present invention, and they are not intended to limit the scope of the present invention, and all equivalent embodiments or modifications that do not depart from the spirit of the present invention should be included in the scope of the present invention.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution. This manner of description is adopted only for clarity; the specification should be taken as a whole, and the technical solutions in the embodiments may be combined as appropriate to form other implementations that will be apparent to those skilled in the art.

Claims (13)

1. A distributed timing task scheduling method, comprising:
an independent flow scheduling process, a service scheduling process and a task timer are respectively configured in at least two nodes;
selecting one node as a designated node to respond to the timing task, issuing the timing task to a message queue, and establishing listening between the message queue and at least one node selected from the remaining nodes;
the flow scheduling process determines, according to the time limit set by the task timer and based on a retry mechanism, whether a new designated node is reselected by the message queue, and the service scheduling process in the node selected by the message queue responds to the timing task;
wherein listening is established between all nodes and the message queue after the timing task is issued to the message queue, and the flow scheduling process determines, according to the time limit set by the task timer and based on a polling mechanism, whether a new designated node is reselected by the message queue, so that the service scheduling process contained in the new designated node responds to the timing task; and when the timing task cannot be responded to by the service scheduling process of the designated node, the flow scheduling process determines, based on the distributed lock corresponding to the timing task and according to the time limit set by the task timer, whether a new designated node is reselected from the remaining nodes by a polling mechanism in the message queue, so that the service scheduling process contained in the new designated node responds to the timing task.
2. The method of claim 1, further comprising, before configuring the independent flow scheduling processes, service scheduling processes and task timers in the at least two nodes: detecting the response capability of all nodes to the timing task, and storing the detection results in the message queue.
3. The method of claim 2, wherein the node configures a pair consisting of a flow scheduling process and a service scheduling process, and decouples the flow scheduling process and the service scheduling process by using the message queue;
the message queue is RabbitMQ.
4. The distributed timed task scheduling method according to claim 2, wherein the node configures a plurality of pairs of flow scheduling processes and service scheduling processes, and decouples the flow scheduling processes and the service scheduling processes by means of message queues;
the message queue is RabbitMQ.
5. The method according to claim 4, wherein, among the plurality of pairs of flow scheduling processes and service scheduling processes configured in the node, the number of flow scheduling processes is greater than the number of service scheduling processes; the service scheduling process is configured as one or more service units for responding to timing tasks, and a mapping relationship exists between the service scheduling process and the service units;
The service unit is a container, a virtual machine or a micro service;
the distributed timing task scheduling method is used for a multi-timing task scene.
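[Illustrative note, not part of the claims] The ratio and mapping of claim 5, shown as plain data; every name and count below is a hypothetical example, not a requirement of the claim.

```python
# More flow scheduling processes than service scheduling processes, and a
# mapping from each service scheduler onto one or more service units
# (container, virtual machine, or microservice).
FLOW_SCHEDULERS = [f"flow-{i}" for i in range(4)]   # 4 flow processes...
SERVICE_SCHEDULERS = ["svc-0", "svc-1"]             # ...2 service processes

SERVICE_UNIT_MAP = {                                # the mapping relationship
    "svc-0": ["container://task-runner-a"],
    "svc-1": ["vm://task-runner-b", "microservice://report-api"],
}

def route(task_id: str) -> str:
    """Pick a service scheduler for a task (hash-based, illustrative only;
    note Python's str hash is salted per process run)."""
    return SERVICE_SCHEDULERS[hash(task_id) % len(SERVICE_SCHEDULERS)]
```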
6. The distributed timed task scheduling method according to claim 1, 3, 4 or 5, further comprising:
after the service scheduling process in the selected node responds to the timed task, storing the result of responding to the timed task in a storage device, and notifying the designated node through the message queue.
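[Illustrative note, not part of the claims] A sketch of claim 6's store-then-notify step; the in-memory dict stands in for the storage device, and the queue name and message shape are invented.

```python
# Persist the response result, then notify the designated node via the queue.
import json
import pika

RESULTS: dict[str, str] = {}  # stand-in for the storage device

def finish_task(task_id: str, result: str) -> None:
    RESULTS[task_id] = result  # store the result of responding to the task
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.queue_declare(queue="task.done", durable=True)
    ch.basic_publish(exchange="", routing_key="task.done",
                     body=json.dumps({"task": task_id, "ok": True}).encode())
    conn.close()
```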
7. A distributed timed task scheduling system, comprising:
a task scheduler, a task timer, a service scheduler, a message queue, and a timed-task issuing component deployed in the nodes;
wherein the timed-task issuing component selects one node as a designated node to respond to a timed task, issues the timed task to the message queue, and establishes listening between the message queue and at least one node selected from the remaining nodes; the task scheduler comprises a flow scheduling process, the flow scheduling process determines, according to the time limit set by the task timer, whether the message queue reselects a new designated node based on a retry mechanism, and the service scheduler comprising the service scheduling process in the node selected by the message queue responds to the timed task;
listening is established between all nodes and the message queue when the timed task is issued to the message queue, and the task scheduler determines, according to the time limit set by the task timer, whether the message queue reselects a new designated node based on a polling mechanism, so that the service scheduler containing the service scheduling process in the node selected by the message queue responds to the timed task; when the timed task cannot be responded to by the service scheduler of the designated node, the flow scheduling process contained in the task scheduler determines, based on the distributed lock corresponding to the timed task and according to the time limit set by the task timer, whether the message queue reselects a new designated node from the remaining nodes by the polling mechanism, so that the service scheduler of the new designated node responds to the timed task.
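[Illustrative note, not part of the claims] Claims 1 and 7 both rely on a distributed lock per timed task but do not name a lock backend; the sketch below assumes Redis SET NX EX purely for illustration, with the TTL playing the role of the task timer's limit.

```python
# Hypothetical per-task distributed lock. Redis is an assumption, not part
# of the patent, which leaves the lock implementation unspecified.
import redis

r = redis.Redis(host="localhost", port=6379)

def try_acquire(task_id: str, node: str, ttl_s: int = 30) -> bool:
    """Only the lock holder may respond to the task; if it cannot respond
    before the TTL expires, the lock lapses and the polling mechanism can
    hand the task to a newly reselected designated node."""
    return bool(r.set(f"lock:timed-task:{task_id}", node, nx=True, ex=ttl_s))

def release(task_id: str) -> None:
    """Free the lock once the task has been responded to."""
    r.delete(f"lock:timed-task:{task_id}")
```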
8. The distributed timed task scheduling system according to claim 7, wherein,
before the independent flow scheduling process, the service scheduling process and the task timer are configured in each of the at least two nodes, the response capability of all nodes to the timed task is detected, and the detection results are stored in the message queue.
9. The distributed timed task scheduling system according to claim 8, wherein the task scheduler and the service scheduler configured in the node comprise one pair of a flow scheduling process and a service scheduling process, and the flow scheduling process and the service scheduling process are decoupled by using the message queue;
the message queue is RabbitMQ.
10. The distributed timed task scheduling system according to claim 8, wherein the task scheduler and the service scheduler configured in the node comprise multiple pairs of flow scheduling processes and service scheduling processes, and the flow scheduling processes and the service scheduling processes are decoupled by using the message queue;
the message queue is RabbitMQ.
11. The distributed timed task scheduling system according to claim 10, wherein the number of flow scheduling processes is greater than the number of service scheduling processes; the service scheduling processes are configured as one or more service units for responding to timed tasks, and the service scheduling processes and the service units have a mapping relationship;
the service unit is a container, a virtual machine or a microservice;
the distributed timed task scheduling system is configured to respond to a multi-timed-task scenario.
12. The distributed timed task scheduling system according to claim 7, 9, 10 or 11, further comprising:
a storage device, wherein after the service scheduling process in the selected node responds to the timed task, the result of responding to the timed task is stored in the storage device, and the designated node is notified through the message queue; the storage device is selected from a JVM memory, a distributed storage component, or a database.
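[Illustrative note, not part of the claims] Claim 12 allows the storage device to be JVM memory, a distributed storage component, or a database. The pluggable-store abstraction below is only illustrative, with an in-process dict standing in for the in-memory option (transposed to Python rather than the JVM).

```python
# Hypothetical pluggable result store for claim 12's storage options.
from abc import ABC, abstractmethod

class ResultStore(ABC):
    @abstractmethod
    def save(self, task_id: str, result: str) -> None: ...
    @abstractmethod
    def load(self, task_id: str) -> str | None: ...

class InMemoryStore(ResultStore):
    """In-process memory (the 'JVM memory' option, transposed to Python)."""
    def __init__(self) -> None:
        self._data: dict[str, str] = {}
    def save(self, task_id: str, result: str) -> None:
        self._data[task_id] = result
    def load(self, task_id: str) -> str | None:
        return self._data.get(task_id)
```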
13. A server cluster, characterized in that
the server cluster is configured with at least two nodes, and
the server cluster runs the distributed timed task scheduling method according to any one of claims 1 to 6.
CN202010107644.6A 2020-02-21 2020-02-21 Distributed timing task scheduling method, scheduling system and server cluster Active CN111338773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010107644.6A CN111338773B (en) 2020-02-21 2020-02-21 Distributed timing task scheduling method, scheduling system and server cluster

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010107644.6A CN111338773B (en) 2020-02-21 2020-02-21 Distributed timing task scheduling method, scheduling system and server cluster

Publications (2)

Publication Number Publication Date
CN111338773A CN111338773A (en) 2020-06-26
CN111338773B true CN111338773B (en) 2023-06-20

Family

ID=71184151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010107644.6A Active CN111338773B (en) 2020-02-21 2020-02-21 Distributed timing task scheduling method, scheduling system and server cluster

Country Status (1)

Country Link
CN (1) CN111338773B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784185A (en) * 2020-07-14 2020-10-16 广东电网有限责任公司电力调度控制中心 Distributed power distribution communication network timed task scheduling system
CN111858007A (en) * 2020-07-29 2020-10-30 广州海鹚网络科技有限公司 Task scheduling method and device based on message middleware
CN111913793A (en) * 2020-07-31 2020-11-10 同盾控股有限公司 Distributed task scheduling method, device, node equipment and system
CN111970148A (en) * 2020-08-14 2020-11-20 北京金山云网络技术有限公司 Distributed task scheduling method and system
CN112131179B (en) * 2020-09-23 2023-11-21 平安科技(深圳)有限公司 Task state detection method, device, computer equipment and storage medium
CN111930492B (en) * 2020-10-12 2021-01-19 南京赛宁信息技术有限公司 Task flow scheduling method and system based on decoupling task data model
CN112445595B (en) * 2020-11-26 2022-10-25 深圳晶泰科技有限公司 Multitask submission system based on slurm computing platform
CN112910952B (en) * 2021-01-13 2022-08-23 叮当快药科技集团有限公司 Distributed task scheduling method and device, storage medium and electronic device
CN113342499B (en) * 2021-06-29 2024-04-30 中国农业银行股份有限公司 Distributed task calling method, device, equipment, storage medium and program product
CN114063936B (en) * 2022-01-18 2022-03-22 苏州浪潮智能科技有限公司 Method, system, equipment and storage medium for optimizing timing task
CN117370457A (en) * 2023-09-26 2024-01-09 浪潮智慧科技有限公司 Multithreading data real-time synchronization method, equipment and medium

Citations (1)

Publication number Priority date Publication date Assignee Title
CN106484530A (en) * 2016-09-05 2017-03-08 努比亚技术有限公司 A kind of distributed task dispatching O&M monitoring system and method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9741040B2 (en) * 2013-08-30 2017-08-22 Sap Se High-load business process scalability

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN106484530A (en) * 2016-09-05 2017-03-08 努比亚技术有限公司 A kind of distributed task dispatching O&M monitoring system and method

Non-Patent Citations (1)

Title
Qu Zhijian; Wang Qunfeng; Wang Hanlin. CQB parallel balancing control method for the queueing model of a stream computing cluster in a dispatching center. Proceedings of the CSEE, 2018, (19), full text. *

Also Published As

Publication number Publication date
CN111338773A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
CN111338773B (en) Distributed timing task scheduling method, scheduling system and server cluster
CN111338774B (en) Distributed timing task scheduling system and computing device
US11677860B2 (en) Decentralization processing method, communication proxy, host, and storage medium
US10846140B2 (en) Off-site backup of workloads for multi-tenant cloud computing system
CN107733726B (en) Service request processing method and device
JP6669682B2 (en) Cloud server scheduling method and apparatus
US11157324B2 (en) Partitioning for delayed queues in a distributed network
US8938510B2 (en) On-demand mailbox synchronization and migration system
EP2710461B1 (en) Cross-cloud computing for capacity management and disaster recovery
CN105468450A (en) Task scheduling method and system
US8447757B1 (en) Latency reduction techniques for partitioned processing
US9104488B2 (en) Support server for redirecting task results to a wake-up server
CN105208047A (en) Online and off line methods of distributed system server, and server
CN106603692B (en) Data storage method and device in distributed storage system
WO2017185615A1 (en) Method for determining service status of service processing device and scheduling device
CN111427670A (en) Task scheduling method and system
US20220232073A1 (en) Multichannel virtual internet protocol address affinity
CN105681426B (en) Heterogeneous system
CN113206877A (en) Session keeping method and device
EP4369181A1 (en) Node for running container group, and management system and method of container group
US20180150246A1 (en) Managing of Storage Resources
CN113014608B (en) Flow distribution control method and device, electronic equipment and storage medium
CN105373563B (en) Database switching method and device
CN107426012B (en) Fault recovery method and device based on super-fusion architecture
CN115640100B (en) Virtual machine information synchronization method and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant