CN117056056A - Task execution method and device, storage medium and electronic equipment


Info

Publication number
CN117056056A
Authority
CN
China
Prior art keywords
task
target
plug
service node
target task
Prior art date
Legal status
Pending
Application number
CN202311307561.1A
Other languages
Chinese (zh)
Inventor
赵楚涵
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202311307561.1A
Publication of CN117056056A
Status: Pending

Classifications

    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 11/0721 Error or fault processing not based on redundancy, the processing taking place on a specific hardware platform or in a specific software environment within a central processing unit [CPU]
    • G06F 11/079 Root cause analysis, i.e. error or fault diagnosis
    • G06F 11/0793 Remedial or corrective actions
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a task execution method and device, a storage medium, and an electronic device. The method includes: acquiring, on a server, a target task to be executed that is sent by a client; invoking, through a first service node on the server, a first plug-in process to execute the target task; and, when the first plug-in process becomes abnormal, acquiring execution process parameters of the target task from a target storage system through the first service node and invoking a second plug-in process through the first service node to continue executing the target task according to the execution process parameters. The invention solves the technical problem of low task execution efficiency.

Description

Task execution method and device, storage medium and electronic equipment
Technical Field
The present invention relates to the field of computers, and in particular, to a task execution method and apparatus, a storage medium, and an electronic device.
Background
In the related art, when a client initiates a task, the task is executed on the client side, or is scheduled onto local nodes by a cluster scheduler: for example, in Apache Mesos the Mesos Master (master node) component manages resources and schedules tasks while the Mesos Agent executes tasks on the local node; in Kubernetes the kube-scheduler component assigns tasks to appropriate nodes while kubelet (the Kubernetes node agent) executes tasks on the local node. In these approaches, if task execution fails, the task has to be restarted from the beginning; for example, if a system crash occurs before a task completes, the task must be executed again from scratch, which results in low task execution efficiency.
No effective solution has yet been proposed for the problem of low task execution efficiency.
Disclosure of Invention
The embodiment of the invention provides a task execution method and device, a storage medium and electronic equipment, which are used for at least solving the technical problem of low task execution efficiency.
According to an aspect of the embodiment of the present invention, there is provided a task execution method including: acquiring a target task to be executed, which is sent by a client, on a server, wherein the target task is used for executing target operation on a target event on the client; invoking a first plug-in process to execute the target task through a first service node on the server, wherein the first plug-in process is used for executing a task of a target type, and the type of the target task is the target type; under the condition that the first plug-in process is abnormal, acquiring an execution process parameter of the target task from a target storage system through the first service node, and calling a second plug-in process through the first service node to continuously execute the target task according to the execution process parameter, wherein the execution process parameter is stored in the target storage system in the process of executing the target task by the first plug-in process, the first plug-in process is set to store the update time of the execution process parameter of the target task in the process of executing the target task into a target memory, and the second plug-in process is used for executing the target type task.
According to another aspect of the embodiment of the present invention, there is also provided a task execution device, including: an acquisition unit, a first execution unit and a second execution unit. The acquisition unit is configured to acquire, on a server, a target task to be executed that is sent by a client, where the target task is used for executing a target operation on a target event on the client. The first execution unit is configured to invoke, through a first service node on the server, a first plug-in process to execute the target task, where the first plug-in process is used for executing tasks of a target type and the type of the target task is the target type. The second execution unit is configured to obtain, when an exception occurs in the first plug-in process, execution process parameters of the target task from a target storage system through the first service node, and to call, through the first service node, a second plug-in process to continue executing the target task according to the execution process parameters, where the execution process parameters are stored in the target storage system during execution of the target task by the first plug-in process, the first plug-in process is configured to store, in a target memory, the update time of the execution process parameters of the target task during execution of the target task, and the second plug-in process is used for executing tasks of the target type.
In one exemplary embodiment, the apparatus further comprises: a first detection unit configured to detect whether N consecutive first cycles do not store the execution process parameters of the target task in the target storage system in the process of executing the target task, where N is a positive integer greater than or equal to 2, in a case where the first plug-in process is set to store the execution process parameters of the target task in the target storage system in every preset first cycle; determining that an abnormality occurs in the first plug-in process in the case that the execution process parameters of the target task are not stored in the target storage system in the process of executing the target task in the continuous N first periods; or a second detecting unit, configured to detect whether M times of failing to store the execution process parameters of the target task in the process of executing the target task in the target storage system occur within a preset duration, where M is a positive integer greater than or equal to 1, in a case where the first plug-in process is set to store the execution process parameters of the target task in the target storage system in the process of executing the target task every preset first period; and determining that the first plug-in process is abnormal under the condition that M times of occurrence of the condition that the execution process parameters of the target task are not stored in the target storage system in the process of executing the target task in the preset time period.
In one exemplary embodiment, the first detection unit includes: the detection module is used for detecting whether the update time of the execution process parameters stored in the target memory changes or not through the continuous N first periods of the first service node under the condition that the first service node and the first plug-in process are set to share the same target memory; and the determining module is used for determining that the continuous N first periods do not store the execution process parameters of the target task in the process of executing the target task to the target storage system when the continuous N first periods detect that the update time of the execution process parameters stored in the target memory is unchanged through the first service node.
In an exemplary embodiment, the second execution unit includes: a first execution module, configured to, when a set of execution process parameters is stored in the target storage system, obtain, by the first service node, the set of execution process parameters from the target storage system, and call, by the first service node, the second plug-in process to continue to execute the target task according to the set of execution process parameters, where the set of execution process parameters are the execution process parameters at a plurality of moments in the process of executing the target task; or the second execution module is configured to, when the set of execution process parameters is stored in the target storage system, obtain, by using the first service node, the execution process parameter at a last time in the set of execution process parameters from the target storage system, and call, by using the first service node, the second plug-in process to continue to execute the target task according to the execution process parameter at the last time.
In an exemplary embodiment, the second execution unit includes: and the third execution module is used for ending the first plug-in process through the first service node, creating the second plug-in process, and calling the second plug-in process to continue to execute the target task according to the execution process parameters.
In an exemplary embodiment, the second execution unit includes: and the fourth execution module is used for acquiring the execution process parameters of the target task from the target storage system through the second service node on the server and calling a third plug-in process through the second service node to continue executing the target task according to the execution process parameters under the condition that the first service node is set to be in heartbeat connection with the service node scheduler on the server and the heartbeat connection of the service node scheduler and the first service node is abnormal.
In one exemplary embodiment, the apparatus further comprises: a third detecting unit, configured to detect whether P consecutive second periods do not receive a heartbeat signal sent by the first service node, where P is a positive integer greater than or equal to 2, when the first service node is set to send a heartbeat signal to the service node scheduler every a preset second period; and under the condition that the heartbeat signals sent by the first service node are not received in the continuous P second periods, determining that the heartbeat connection between the service node scheduler and the first service node is abnormal.
In an exemplary embodiment, the fourth execution module is configured to: deleting the first service node through the service node dispatcher, creating the second service node, acquiring the execution process parameters of the target task from the target storage system through the second service node, and calling the third plug-in process through the second service node to continue to execute the target task according to the execution process parameters.
In one exemplary embodiment, the apparatus further comprises: the transmission unit is used for transmitting the target task to a plug-in scheduling service on the server before the target task is executed by calling a first plug-in process through a first service node on the server; the allocation unit is used for allocating the target task to the first plug-in process in a plug-in process set through the plug-in scheduling service, wherein each plug-in process in the plug-in process set is used for executing one type of task in a task type set, and the task type set comprises the target type.
In one exemplary embodiment, before the target task is executed by invoking a first plug-in process through the first service node on the server, the apparatus further comprises: a creation unit, configured to create a target task identifier for the target task; a determining unit, configured to determine whether a task identified by the target task identifier is included in the set of tasks that have not yet been executed on the server; the first execution unit includes: a fifth execution module, configured to invoke, through the first service node on the server, the first plug-in process to execute the target task when no task identified by the target task identifier is included in the set of tasks that have not yet been executed on the server; the apparatus further comprises: a filtering unit, configured to filter out the target task when a task identified by the target task identifier is included in the set of tasks that have not yet been executed on the server.
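As an illustration only, the de-duplication behaviour of the creation unit, the determining unit and the filtering unit described above could be sketched as follows; the class and method names are assumptions and are not part of the disclosed apparatus.

    class TaskDeduplicator:
        """Sketch of the creation/determination/filtering units above."""

        def __init__(self):
            self.pending_task_ids = set()   # tasks on the server that have not yet been executed

        @staticmethod
        def create_task_id(client_id: str, event_id: str, operation: str) -> str:
            # The target task identifier could, for example, be derived from the
            # client, the target event and the requested operation (an assumption).
            return f"{client_id}:{event_id}:{operation}"

        def submit(self, task_id: str, dispatch) -> bool:
            """Execute the task only if its identifier is not already pending."""
            if task_id in self.pending_task_ids:
                return False                 # duplicate: filter out the target task
            self.pending_task_ids.add(task_id)
            dispatch(task_id)                # hand over to the first service node / plug-in process
            return True

        def mark_executed(self, task_id: str):
            self.pending_task_ids.discard(task_id)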
In one exemplary embodiment, the apparatus further comprises: and the sending unit is used for sending the execution result of the target task to the client under the condition that the target task is executed.
In one exemplary embodiment, the acquisition unit includes: the acquisition module is used for acquiring the target task to be executed, which is sent by the client, on the server, wherein the target task is used for executing the target operation on the target event currently performed on the client.
According to a further aspect of embodiments of the present application, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to perform the task execution method described above when run.
According to yet another aspect of embodiments of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device performs the task performing method as above.
According to still another aspect of the embodiments of the present application, there is also provided an electronic device including a memory in which a computer program is stored, and a processor configured to execute the task execution method described above by the computer program.
In the embodiment of the present application, in the process of invoking a plug-in process through a service node of the server to execute a task, the execution process parameters of the task are stored in a storage system. When the plug-in process originally executing the task becomes abnormal, the execution process parameters of the task can be acquired from the storage system, and a new plug-in process can be called by the service node on the server to continue executing the task according to those execution process parameters, so that the task does not have to be restarted from the beginning, thereby solving the technical problem of low task execution efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is an application scenario diagram of an alternative task execution method according to an embodiment of the present application;
FIG. 2 is an application scenario diagram II of an alternative task execution method according to an embodiment of the present application;
FIG. 3 is a flow chart of an alternative task execution method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an alternative determination of an exception to a first plug-in process in accordance with an embodiment of the present application;
FIG. 5 is a schematic diagram of an alternative method for detecting whether an update time of an execution process parameter has changed, according to an embodiment of the application;
FIG. 6 is a schematic diagram of an alternative set of execution parameters according to an embodiment of the application;
FIG. 7 is a schematic diagram of an alternative process for creating a second plug-in process according to an embodiment of the present application;
FIG. 8 is a schematic diagram of maintaining a heartbeat connection between a service node scheduler and a service node in accordance with an embodiment of the present application;
FIG. 9 is a schematic diagram of determining whether an anomaly occurs in the heartbeat connection between a service node scheduler and a first service node in accordance with an embodiment of the present application;
FIG. 10 is a schematic diagram of an alternative creation of a second service node according to an embodiment of the application;
FIG. 11 is a schematic diagram of an alternative assignment of tasks to appropriate plug-in processes in accordance with an embodiment of the application;
FIG. 12 is a schematic diagram of an alternative calling plug-in process performing tasks according to an embodiment of the application;
FIG. 13 is a schematic diagram of an alternative return of execution results to a client, according to an embodiment of the application;
FIG. 14 is an application scenario diagram III of a task execution method according to an embodiment of the present application;
FIG. 15 is an application scenario diagram IV of a task execution method according to an embodiment of the present application;
FIG. 16 is a schematic diagram of an architecture of an alternative task execution system in accordance with an embodiment of the present application;
FIG. 17 is a second schematic diagram of an alternative task execution system architecture according to an embodiment of the present application;
FIG. 18 is a schematic diagram of interaction between a client and a cloud according to an embodiment of the application;
FIG. 19 is a schematic diagram I of an alternative task execution method according to an embodiment of the application;
FIG. 20 is a schematic diagram II of an alternative task execution method according to an embodiment of the present application;
FIG. 21 is a schematic diagram of an alternative task performing device according to an embodiment of the present application;
FIG. 22 is a schematic diagram of an alternative electronic device in accordance with an embodiment of the application;
FIG. 23 is a block diagram of a computer system of an alternative electronic device in accordance with an embodiment of the present application.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without making any inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, technical terms involved in the embodiments of the present application are explained as follows:
Apache Mesos: a distributed task scheduling and resource management system. The core components of Mesos include the Mesos Master and the Mesos Agent. The Mesos Master is responsible for managing resources and scheduling tasks, and the Mesos Agent is responsible for executing tasks on local nodes. Mesos also supports integration with other schedulers (e.g., Marathon, Chronos) to facilitate management of different types of tasks.
Kubernetes: an open-source container orchestration system used to automate application deployment, scaling, and management. The core components of Kubernetes include the Kubernetes API (Application Programming Interface) Server, etcd (a distributed, reliable key-value store), kube-controller-manager, kube-scheduler, and kubelet. The Kubernetes API Server is responsible for handling user requests and managing cluster state; etcd is a distributed key-value store that holds cluster configuration data; kube-controller-manager is responsible for managing the various controllers; kube-scheduler is responsible for assigning tasks to appropriate nodes; and kubelet is responsible for executing tasks on local nodes.
Manager: the media service cluster center scheduler, or service node scheduler.
Worker: a media service node, or called service node.
Zookeeper cluster (abbreviated as ZK): a distributed coordination service, or plug-in process scheduler.
Redis: remote Dictionary Server, remote dictionary service.
According to an aspect of the embodiment of the present invention, there is provided a task execution method, optionally, as an alternative implementation, the task execution method may be applied, but not limited to, in the environment shown in fig. 1.
As shown in fig. 1, a client 102 is deployed on a terminal device. The client 102 may be, but is not limited to, a client of an online conference application, a game application, a live streaming application, or the like. When a user participates in an online conference through the client 102, an online conference interface may be displayed on the terminal device; the online conference interface may display the other accounts participating in the online conference, for example account A and account B, as well as the conference content of the online conference. The online conference interface may also display a toolbar containing buttons for muting, recording, sharing, settings, ending the conference, and the like. When a click operation is performed on the recording button, it indicates that the user wishes to record the online conference; in such a case the client 102 may, but is not limited to, send a recording task to the server 104, where the recording task is used to perform a recording operation on the online conference interface on the client 102.
In such a case, as shown in fig. 2, service nodes 1 to 3 may be but not limited to being deployed on the server 104, and corresponding plug-in processes may be but not limited to being deployed on each service node, for example, plug-in processes 1 to 4 may be but not limited to being deployed on the service node 2, where plug-in process 1 is used to perform tasks of type 1, plug-in process 2 is used to perform tasks of type 2, plug-in process 3 is used to perform tasks of type 3, and plug-in process 4 is used to perform tasks of type 2. In such a case, performing the recording task by server 104 may include, but is not limited to, the steps of:
in step S201, a recording task to be executed sent by the client 102 is obtained on the server 104, where the recording task is used to execute a recording operation on an online conference interface on the client 102.
In step S202, the plug-in process 4 is invoked by the service node 2 on the server 104 to execute the recording task, where the type of the recording task is type 2.
In step S203, in the event of an exception in the plug-in process 4, the service node 2 obtains the execution process parameter of the recording task from the target storage system, and the plug-in process 2 is called by the service node 2 to continue to execute the recording task according to the execution process parameter, where in the process of executing the recording task by the plug-in process 4, the execution process parameter is stored in the target storage system, and the plug-in process 4 is configured to store the update time of the execution process parameter of the recording task in the process of executing the recording task in the target memory.
For example, the target storage system may, but is not limited to, store execution parameters of the respective tasks in the process of executing the tasks by invoking the respective plug-in process deployed on each of the plurality of service nodes on the server 104, e.g., the target storage system may, but is not limited to, store execution parameters of the task 1 executed by invoking the plug-in process 1 by the service node 2 on the server 104, and execution parameters of the record task executed by invoking the plug-in process 4 by the service node 2 on the server 104, etc.
The target memory may store, but is not limited to, an update time of an execution process parameter of a task during execution of the task, for example, the target memory may store, but is not limited to, update times t1, t2, and t4 of an execution process parameter of a task 1, and the like, and an update time t5 of an execution process parameter of a recording task, and the like.
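Purely as an illustration of the relationship between the target storage system and the target memory described above, the following sketch assumes a Redis-backed target storage system (distributed Redis is mentioned later as one option) and represents the target memory as a simple in-process mapping; the key names, the address and the field names are invented.

    import json
    import time

    import redis   # the description mentions distributed Redis as one possible target storage system

    r = redis.Redis(host="localhost", port=6379)   # hypothetical address
    target_memory = {}                             # stand-in for the target memory on the server

    def store_checkpoint(task_id: str, plugin_pid: int, params: dict) -> None:
        """Append the task's execution process parameters to the target storage
        system and record their update time (and the plug-in pid) in the target memory."""
        r.rpush(f"task:{task_id}:params", json.dumps(params))       # one entry per moment (t1, t2, ...)
        target_memory[task_id] = {"updated_at": time.time(), "pid": plugin_pid}

    def latest_checkpoint(task_id: str) -> dict:
        """Read the execution process parameters at the last moment."""
        raw = r.lindex(f"task:{task_id}:params", -1)
        return json.loads(raw) if raw else {}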
By this method, on the one hand, the recording task is not executed on the client but on the server, which improves the probability that the recording task is executed successfully; on the other hand, when the plug-in process becomes abnormal, the execution process parameters of the recording task can be recovered from the target storage system in time, so that a new plug-in process can be called to continue executing the recording task according to those parameters. The recording task does not need to be restarted, and its execution efficiency is improved.
Alternatively, in the present embodiment, the above-mentioned terminal device may be a terminal device configured with a target client, and may include, but is not limited to, at least one of the following: a mobile phone (e.g., an Android mobile phone, iOS mobile phone, etc.), a notebook computer, a tablet computer, a palm computer, a MID (Mobile Internet Devices, mobile internet device), a PAD, a desktop computer, a smart television, etc. The target client may be a video client, an instant messaging client, a browser client, an educational client, and the like. The network may include, but is not limited to: a wired network, a wireless network, wherein the wired network comprises: local area networks, metropolitan area networks, and wide area networks, the wireless network comprising: bluetooth, WIFI, and other networks that enable wireless communications. The server may be a single server, a server cluster composed of a plurality of servers, or a cloud server. The above is merely an example, and is not limited in any way in the present embodiment.
Optionally, as an optional embodiment, as shown in fig. 3, the task execution method includes the following steps: step S302, acquiring, on a server, a target task to be executed that is sent by a client; step S304, invoking, through a first service node on the server, a first plug-in process to execute the target task; and step S306, when the first plug-in process becomes abnormal, acquiring execution process parameters of the target task from a target storage system through the first service node and calling a second plug-in process through the first service node to continue executing the target task according to the execution process parameters.
in step S302, the client may include, but is not limited to, a client including an online conference application, a game application or a live application, or a client of an instant messaging application, etc., for example, the game application may include, but is not limited to, a MOBA (Multiplayer Online Battle Arena, multiplayer online tactical game) application, or a shooting-class game application, etc.
The target tasks to be performed sent by the client may include, but are not limited to, multimedia tasks and general tasks involving processing of multimedia data such as audio, images, video, and the like. Multimedia tasks often require real-time processing of large amounts of data to meet the interactive needs of users. The multimedia tasks may include, but are not limited to, audio codec, image processing, video playback and editing, and the like. Multimedia tasks typically require high bandwidth, low latency, and real-time. General tasks may include, but are not limited to, general tasks performed in a computer system, including various data types and application scenarios. Examples of general tasks may include, but are not limited to, text processing, network communications, file manipulation, and the like. The performance requirements of general tasks vary from application to application, some tasks have high real-time requirements, while some tasks can tolerate longer delays.
For example, in the case where the client includes a client of an online conference application, the target task to be performed that is sent by the client may, but is not limited to, performing a recording operation on an online conference interface of an online conference currently performed on the client, or performing a transcription operation on conference content displayed in the online conference interface of the online conference currently performed, or performing a live operation on the online conference currently performed, or the like.
For another example, in the case where the client includes a client of a game application, the target task to be performed transmitted by the client may be, but is not limited to, a live operation on a game screen of an ongoing game displayed on the client, or a recording operation on a game screen of an ongoing game displayed on the client, or a tactical analysis operation on a game screen of an ongoing game displayed on the client, to analyze a tactical used by a player in a current game of the game, or the like.
In step S304, the server may be deployed with one or more service nodes. A service node may be configured to provide the running environment required by the plug-in processes it invokes to execute tasks, and to schedule and control each plug-in process (e.g., ending a plug-in process, creating a plug-in process, etc.), for example determining whether a plug-in process has crashed (in other words, whether the plug-in process is abnormal) and pulling up a new plug-in process after a plug-in process crashes.
Each service node may be, but is not limited to being, deployed with corresponding plug-in processes. It is to be understood that each service node may be deployed with one or more plug-in processes, and the plug-in processes deployed on different service nodes may be the same, different, or partially the same.
Each plug-in process may have a corresponding type of task that it is allowed to execute, and the task types corresponding to different plug-in processes may be identical, different, or partially identical. A plug-in process may be allowed to execute one or more tasks of its corresponding type simultaneously. For example, in the case where the client includes a client of an online conferencing application, the task type of the target task to be performed sent by the client may include, but is not limited to, translation, live streaming, recording, and so forth. For another example, where the client comprises a client of a gaming application, the task type of the target task to be performed sent by the client may include, but is not limited to, statistics, live streaming, recording, and the like.
In order to improve the execution efficiency of tasks sent by the client, the tasks may be executed by, but not limited to, invoking a plug-in process by a service node on the server, the plug-in process may be, but not limited to, including a process for running and implementing plug-in functions, and the plug-in may be, but not limited to, including a set of programs for implementing target operations for executing target events on the client, it being understood that some business functions, such as recording, transcription, translation, etc., that should be executed in the client are migrated to the cloud as plug-ins. For example, for a recording task, a plug-in process may include, but is not limited to, a process that includes running and implementing a set of programs that record an interface to an online meeting on a client.
In this way, the barrier between the client side and the server side is broken down: instead of executing tasks directly in the client, the tasks to be executed that are sent by the client are executed by the server, and the SDK (Software Development Kit) business logic sent by the client is run in a service node of the server. This reduces the performance pressure on the client and greatly improves the stability and reliability of the system.
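The type-based assignment of tasks to plug-in processes described above can be pictured with a short sketch; the dispatcher class below is a simplification of the plug-in scheduling service, and all names are illustrative rather than taken from the application.

    from typing import Callable, Dict, List

    class PluginDispatcher:
        """Sketch of a plug-in scheduling service: each plug-in process declares
        which task type it can execute, and incoming tasks are routed accordingly."""

        def __init__(self):
            self.plugins_by_type: Dict[str, List[Callable]] = {}

        def register(self, task_type: str, plugin_entry: Callable) -> None:
            self.plugins_by_type.setdefault(task_type, []).append(plugin_entry)

        def dispatch(self, task_type: str, task_payload: dict):
            candidates = self.plugins_by_type.get(task_type)
            if not candidates:
                raise ValueError(f"no plug-in process registered for type {task_type}")
            # A real scheduler might pick the least-loaded candidate; here simply the first one.
            return candidates[0](task_payload)

    # Usage sketch: a recording plug-in registered for "type-2" handles recording tasks.
    # dispatcher.register("type-2", record_plugin)
    # dispatcher.dispatch("type-2", {"task": "record online conference"})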
In step S306, during the process in which each service node deployed on the server invokes a plug-in process to execute a task, the execution process parameters of the task may be, but are not limited to being, stored in a target storage system, which may include, but is not limited to, a set of storage devices, e.g. a distributed Redis. For example, while a first plug-in process on the server is invoked through a first service node to execute the target task, the execution process parameters of that task are stored in the target storage system; these parameters may include, but are not limited to, the process context, such as the stack, registers, and file descriptors, used while executing the set of programs that carry out the target task.
For example, a Stack is a memory region used to store function calls and local variables; backing up the state of the stack makes it possible to restore the context of function calls and the values of local variables. A Heap is an area used for dynamically allocating memory; if the program allocates memory on the heap, backing up the state of the heap makes it possible to restore these dynamically allocated memory blocks. Global Variables are variables that are accessible throughout the execution of a program; backing up their state makes it possible to restore their values. Registers are memory internal to the CPU (Central Processing Unit) used to store temporary data and the context of instruction execution; backing up the register state makes it possible to restore the execution state of the CPU. File Descriptors are identifiers used to access files and other I/O (Input/Output) devices; backing up the state of the file descriptors makes it possible to restore access to the files and other I/O devices.
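A hedged sketch of what storing such execution process parameters could look like is given below. A real implementation would typically capture registers and file descriptors with an operating-system-level checkpoint facility (for example CRIU); the sketch only checkpoints application-level state and reopens files by path, which is an assumption, not the method disclosed here.

    import json
    import time

    def snapshot_execution_state(task_id: str, progress: dict, open_files: list) -> bytes:
        """Serialize the application-level portion of the execution context:
        current progress (a stand-in for stack/heap/global state) and the
        paths of open files (a stand-in for file descriptors)."""
        state = {
            "task_id": task_id,
            "timestamp": time.time(),
            "progress": progress,        # e.g. recorded duration, output offset
            "open_files": open_files,    # descriptors themselves cannot be serialized directly
        }
        return json.dumps(state).encode("utf-8")

    def restore_execution_state(blob: bytes) -> dict:
        """Rebuild the execution context from a checkpoint; file handles are
        reopened from their recorded paths rather than restored as descriptors."""
        state = json.loads(blob.decode("utf-8"))
        state["reopened"] = [open(p, "ab") for p in state["open_files"]]
        return state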
The target memory may, but is not limited to, store a timestamp of an update time of an execution process parameter of each task and an identifier of the plug-in process in a process of calling each plug-in process to execute the task, for example, a pid (Process Identifier ) of the plug-in process, determine whether the plug-in process is abnormal according to the timestamp of the update time of each execution process parameter stored in the target memory, and determine the plug-in process with the abnormality according to the identifier of the plug-in process with the abnormality.
When the first plug-in process is not abnormal, the first service node keeps invoking the first plug-in process to execute the target task. When the first plug-in process is abnormal, this indicates that the first plug-in process has crashed and the execution of the target task may fail. In such a case, in order to raise the probability that the task is executed successfully, the execution process parameters of the target task can be obtained from the target storage system through the first service node: for example, by the identifier of the plug-in process that was executing the target task, where plug-in process identifiers and the execution process parameters of their tasks are stored in the target storage system with a correspondence between them, or by the target task identifier of the target task, where task identifiers and the execution process parameters of the tasks are stored in the target storage system with a correspondence between them. The second plug-in process is then invoked through the first service node to continue executing the target task according to those execution process parameters. The first plug-in process and the second plug-in process may be, but are not limited to being, deployed in the same service node or in different service nodes, and the second plug-in process and the first plug-in process may be, but are not limited to being, used for the same type of task.
In this way, when the first plug-in process crashes, the target task can continue to be executed from its existing execution progress by calling the second plug-in process, which reduces the delay of task execution and improves its real-time performance.
In this way, during the process of invoking a plug-in process through a service node of the server to execute a task, the execution process parameters of the task are stored in the storage system. When the plug-in process originally executing the task becomes abnormal, the execution process parameters of the task can be acquired from the storage system, and a new plug-in process can be called by the service node on the server to continue executing the task according to those parameters, so that the task does not need to be restarted and the execution efficiency of the task is improved.
As an alternative, the method further includes determining that the first plugin process is abnormal by one of the following ways:
S11, detecting whether N continuous first periods do not store the execution process parameters of the target task in the target storage system in the process of executing the target task under the condition that the first plug-in process is set to store the execution process parameters of the target task in the target storage system in every preset first period, wherein N is a positive integer greater than or equal to 2; determining that an abnormality occurs in the first plug-in process in the case that the execution process parameters of the target task are not stored in the target storage system in the process of executing the target task in the continuous N first periods;
s12, detecting whether M times of non-storing the execution process parameters of the target task in the process of executing the target task are generated in a preset duration or not in the target storage system under the condition that the first plug-in process is set to store the execution process parameters of the target task in the target storage system in the process of executing the target task every preset first period, wherein M is a positive integer greater than or equal to 1; and determining that the first plug-in process is abnormal under the condition that M times of occurrence of the condition that the execution process parameters of the target task are not stored in the target storage system in the process of executing the target task in the preset time period.
In order to keep the execution process parameters stored in the target storage system up to date, they may be, but are not limited to being, updated dynamically. For example, at time T1 the execution process parameters 1 of the target task are stored in the target storage system, and at time T2 they are updated to execution process parameters 2, where T1 is earlier than T2 and the duration between T1 and T2 is the preset first period. The preset first period may be, but is not limited to, 1 minute, 5 minutes, half an hour, etc., which is not limited by the present application.
Optionally, in this embodiment, in a case where the first plug-in process is set to store, in the target storage system, execution process parameters of the target task in a process of executing the target task every preset first period, detecting whether Q first periods are consecutive to store, in the target storage system, the execution process parameters of the target task in a process of executing the target task, where Q is a positive integer greater than or equal to 2; in the case that the execution process parameters of the target task in the process of executing the target task are stored in the target storage system in Q consecutive first cycles, it is determined that no abnormality occurs in the first plug-in process.
Or, in the case that the first plug-in process is set to store the execution process parameters of the target task in the process of executing the target task in the target storage system at intervals of a preset first period, detecting whether the execution process parameters of the target task in the process of executing the target task are stored in the target storage system for S times within a preset duration, wherein S is a positive integer greater than or equal to 1; and under the condition that S times of storing the execution process parameters of the target task in the process of executing the target task to the target storage system occur within the preset time period, determining that the first plug-in process is not abnormal.
To improve the accuracy of determining that the first plug-in process is abnormal, it may be, but is not limited to, detected whether the execution process parameters of the target task have not been stored in the target storage system for N consecutive first periods. As shown in fig. 4, taking N=3 and M=2 as an example, where the target storage system stores the execution process parameter a of the recording task executed by calling the plug-in process 4 on the server, it may be detected whether the execution process parameters of the recording task have not been stored in the target storage system for 3 consecutive first periods; it can be understood that this means the execution process parameter a of the recording task stored in the target storage system has not been updated for 3 consecutive first periods, and in this case the first plug-in process is determined to be abnormal. Alternatively, if it is detected that the execution process parameters of the recording task have not been stored in the target storage system 2 times within the preset duration, the first plug-in process is likewise determined to be abnormal.
In this way, whether the plug-in process executing the task has crashed can be determined by detecting whether the execution process parameters of the task stored in the target storage system are updated in time, which improves the timeliness of determining whether the plug-in process has crashed.
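The two detection policies (N consecutive first periods, or M misses within a preset duration) can be summarized in a small watchdog sketch; the parameter values and the class name are assumptions, and the update time is assumed to be read from the target memory once per first period.

    import time

    class PluginWatchdog:
        """Sketch of detection policies S11 (N consecutive missed periods) and
        S12 (M missed periods within a preset duration)."""

        def __init__(self, n_consecutive: int = 3, m_misses: int = 2, window_s: float = 1800.0):
            self.n_consecutive = n_consecutive
            self.m_misses = m_misses
            self.window_s = window_s
            self.consecutive = 0
            self.miss_times = []
            self.last_seen_update = None

        def check(self, current_update_time: float) -> bool:
            """Called by the first service node once per first period with the
            update time read from the target memory; returns True if the first
            plug-in process should be treated as abnormal."""
            now = time.time()
            if current_update_time == self.last_seen_update:
                self.consecutive += 1                  # no new checkpoint this period
                self.miss_times.append(now)
            else:
                self.consecutive = 0
            self.last_seen_update = current_update_time
            # policy S11: N consecutive first periods without a stored update
            if self.consecutive >= self.n_consecutive:
                return True
            # policy S12: M misses within the preset duration
            self.miss_times = [t for t in self.miss_times if t > now - self.window_s]
            return len(self.miss_times) >= self.m_misses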
As an alternative, it may be, but is not limited to, detecting whether N consecutive first periods have not stored the execution process parameters of the target task in the course of executing the target task to the target storage system by:
s21, under the condition that the first service node and the first plug-in process are set to share the same target memory, detecting whether the update time of the execution process parameter stored in the target memory changes or not through the continuous N first periods of the first service node;
s22, when the first service node continuously detects that the update time of the execution process parameters stored in the target memory is unchanged in N first periods, determining that the execution process parameters of the target task in the process of executing the target task are not stored in the target storage system in N continuous periods.
Optionally, in this embodiment, the first service node and the first add-in process may be, but not limited to, configured to allow access to an update time of an execution process parameter of the target task stored in the target memory, and the first add-in process may also be, but not limited to, configured to allow real-time update of an update time of an execution process parameter of the target task stored in the target memory, where the target memory may be, but not limited to, disposed on a server where the first service node is located.
Optionally, in this embodiment, it may also, but not limited to, detect whether M times of failing to store the execution process parameters of the target task in the process of executing the target task to the target storage system occur within the preset time period by: under the condition that the first service node detects that the update time of the execution process parameters stored in the M times of target memories does not change in the preset time period, the fact that the execution process parameters of the target tasks in the process of executing the target tasks are not stored in the target storage system for M times in the preset time period is determined.
As shown in fig. 5, for example, but not limited to, the first period is equal to 5 minutes, and the update time t5 of the execution process parameter of the recording task is stored in the target memory. In the process of calling the plug-in process 4 to execute the recording task through the service node 2 on the server 104, the update time of the execution process parameter of the recording task is stored in the target memory.
When the service node 2 and the plug-in process 4 are set to share the same target memory, the service node 2 detects, over 3 consecutive first periods, whether the update time of the execution process parameters of the recording task stored in the target memory has changed. If the service node 2 detects in 3 consecutive first periods that the update time of the execution process parameters of the recording task stored in the target memory has not changed, it is determined that the execution process parameters of the recording task have not been stored in the target storage system for 3 consecutive first periods.
In this way, by detecting whether the update time of the execution process parameters of the task changes, it can be detected whether the execution process parameters of the task have been updated to the target storage system in time, which improves the timeliness of detecting updates to the execution process parameters.
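One concrete, purely illustrative way for the first service node and the first plug-in process to share the update time is a small shared-memory slot: the plug-in process writes a timestamp after each successful store to the target storage system, and the service node only reads it. The sketch uses Python's multiprocessing shared memory as a stand-in for the target memory; the region name is invented.

    import struct
    import time
    from multiprocessing import shared_memory

    # The service node creates the region; the plug-in process would attach to it
    # with create=False. The name is illustrative.
    SLOT = shared_memory.SharedMemory(name="task_update_time", create=True, size=8)

    def plugin_report_update() -> None:
        """Called by the plug-in process immediately after it has stored the
        execution process parameters in the target storage system."""
        SLOT.buf[:8] = struct.pack("d", time.time())

    def node_read_update_time() -> float:
        """Called by the first service node once per first period; an unchanged
        value over N consecutive periods indicates a missed checkpoint."""
        return struct.unpack("d", bytes(SLOT.buf[:8]))[0]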
As an alternative, the method may, but is not limited to, obtain, by the first service node, an execution process parameter of the target task from the target storage system, and call, by the first service node, the second plug-in process to continue to execute the target task according to the execution process parameter:
S31, under the condition that a group of execution process parameters are stored in the target storage system, acquiring the group of execution process parameters from the target storage system through the first service node, and calling the second plug-in process to continue to execute the target task according to the group of execution process parameters through the first service node, wherein the group of execution process parameters are the execution process parameters at a plurality of moments in the process of executing the target task;
s32, under the condition that the group of execution process parameters are stored in the target storage system, acquiring the execution process parameters at the last moment in the group of execution process parameters from the target storage system through the first service node, and calling the second plug-in process through the first service node to continue to execute the target task according to the execution process parameters at the last moment.
In order to improve the efficiency of task execution and the user experience, the task can continue to be executed from its existing execution progress by calling the second plug-in process through the first service node. A set of execution process parameters may be, but is not limited to, the execution process parameters at a plurality of moments during execution of the target task; optionally, the plurality of moments may be, but are not limited to, moments within a target duration before the current detection moment, where the current detection moment may be, but is not limited to, the moment at which the abnormality of the first plug-in process occurs. It will be appreciated that the values of the execution process parameters of the original plug-in process stored in the target storage system at the plurality of moments may be assigned to the new plug-in process, so that the new plug-in process may continue to execute the task from the task's existing execution progress.
As shown in fig. 6, the plug-in process 4 may be called by the service node 2 on the server 104 to execute the recording task. When the plug-in process 4 becomes abnormal, a set of execution process parameters of the recording task is obtained from the target storage system through the service node 2, and the plug-in process 2 is called by the service node 2 to continue executing the recording task according to the set of execution process parameters, where the set of execution process parameters may include, but is not limited to, the execution process parameters of the recording task corresponding to time t1, time t2, and time t5, where t1 is earlier than t2 and t2 is earlier than t5. In this way, the task continues to be executed using the execution process parameters at a plurality of recent moments, which improves the completeness of the execution process parameters needed to continue executing the task.
Alternatively, the second plug-in process may be invoked by the first service node to continue executing the target task according to only the execution process parameters at the last moment in the set of execution process parameters. It will be appreciated that the latest value of the execution process parameters of the original plug-in process stored in the target storage system is assigned to the new plug-in process, so that by calling the new plug-in process the task can continue from its most recent execution progress.
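The two restoration variants above (S31, using the whole set of execution process parameters, and S32, using only the last moment) differ only in how much checkpoint history is handed to the new plug-in process; a sketch under the same Redis key layout assumed earlier:

    import json

    import redis

    r = redis.Redis()   # same hypothetical key layout as in the earlier sketch

    def resume_task(task_id: str, start_plugin, use_full_history: bool = False):
        """Hand the stored execution process parameters to a new plug-in process
        of the same task type, either the whole history (S31) or the last moment (S32)."""
        key = f"task:{task_id}:params"
        if use_full_history:
            history = [json.loads(x) for x in r.lrange(key, 0, -1)]   # parameters at a plurality of moments
            return start_plugin(task_id, history)
        latest = r.lindex(key, -1)                                    # parameters at the last moment only
        return start_plugin(task_id, [json.loads(latest)] if latest else [])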
As an alternative, the second plug-in process may be called through the first service node to continue executing the target task according to the execution process parameters in, but not limited to, the following manner:
S41, ending the first plug-in process through the first service node, creating the second plug-in process, and calling the second plug-in process to continue executing the target task according to the execution process parameters.
In the case that an abnormality occurs in the first plug-in process, the server may, but is not limited to, end the first plug-in process through the first service node; it may be understood that the first plug-in process stops executing the target task, the second plug-in process is created, and the second plug-in process is called to continue executing the target task according to the execution process parameters.
As shown in fig. 7, service node 1 to service node 3 may be, but are not limited to being, deployed on the server 104, and corresponding plug-in processes may be, but are not limited to being, deployed on each service node; for example, plug-in process 1, plug-in process 3 and plug-in process 4 may be, but are not limited to being, deployed on the service node 2, where plug-in process 1 is used to perform tasks of type 1, plug-in process 3 is used to perform tasks of type 3, and plug-in process 4 is used to perform tasks of type 2. In such a case, plug-in process 4 may be, but is not limited to being, called by the service node 2 on the server 104 to perform a recording task, where the type of the recording task is type 2.
In the case that plug-in process 4 is abnormal, the execution process parameters of the recording task are acquired from the target storage system through the service node 2, plug-in process 4 is ended through the service node 2, plug-in process 2 is created, and plug-in process 2 is called to continue executing the recording task according to the execution process parameters of the recording task.
In this way, in the case that the plug-in process executing a task is abnormal, a new plug-in process is created, so that the abnormal plug-in process is switched to the new plug-in process in time, which improves the timeliness of switching the plug-in process that executes the task.
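As a rough sketch of step S41, a service node might replace a crashed plug-in process roughly as follows, here using Python's multiprocessing module; the worker entry point and parameter layout are illustrative assumptions rather than the embodiment's actual implementation.

```python
import multiprocessing as mp
from typing import Any, Dict, Optional

def plugin_worker(task_id: str, params: Dict[str, Any]) -> None:
    """Illustrative plug-in entry point: continue the task from the recovered parameters."""
    # ... continue executing the task of the target type here ...
    pass

def replace_plugin_process(old_proc: Optional[mp.Process],
                           task_id: str,
                           recovered_params: Dict[str, Any]) -> mp.Process:
    # End the abnormal first plug-in process if it is still alive.
    if old_proc is not None and old_proc.is_alive():
        old_proc.terminate()
        old_proc.join(timeout=5)

    # Create the second plug-in process and let it continue the target task
    # according to the execution process parameters fetched from the storage system.
    new_proc = mp.Process(target=plugin_worker, args=(task_id, recovered_params))
    new_proc.start()
    return new_proc
```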
As an alternative, the second plug-in process may be called by the first service node to continue executing the target task according to the execution process parameters in, but not limited to, the following manner:
S51, in the case that the first service node is set to keep a heartbeat connection with a service node scheduler on the server and the heartbeat connection between the service node scheduler and the first service node is abnormal, acquiring the execution process parameters of the target task from the target storage system through a second service node on the server, and calling a third plug-in process through the second service node to continue executing the target task according to the execution process parameters.
The service node scheduler may be, but is not limited to being, configured to schedule and control the tasks executed by the plug-in processes of the respective service nodes. In order to detect in real time whether an abnormality occurs in a service node, the service node scheduler on the server may be, but is not limited to being, set to keep a heartbeat connection with each service node on the server; in the case that no abnormality occurs in the heartbeat connection between the service node scheduler and the first service node, the first plug-in process keeps being called through the first service node to execute the target task.
In the case that the heartbeat connection between the service node scheduler and the first service node is abnormal, it may indicate that the first service node has not replied a heartbeat packet to the service node scheduler within the expected time. In such a case, the first service node may have stopped responding, and continuing to call the first plug-in process through the first service node to execute the target task may cause the target task to fail. As shown in fig. 8, the service node scheduler on the server 104 may, but is not limited to, keep heartbeat connections with service node 1, service node 2 and service node 3 on the server respectively; in the case that the heartbeat connection between the service node scheduler and service node 2 is abnormal, the task may, but is not limited to, be continued through a new service node according to the execution process parameters of the task.
Optionally, in this embodiment, a plurality of plug-in processes may be, but are not limited to being, deployed on the first service node, and each of the plurality of plug-in processes may be, but is not limited to being, executing a task. In the case that the first service node is set to keep a heartbeat connection with the service node scheduler on the server and the heartbeat connection between the service node scheduler and the first service node is abnormal, the execution process parameters of the tasks being executed by all plug-in processes on the first service node may be, but are not limited to being, acquired from the target storage system through a second service node on the server, and a group of plug-in processes is called through the second service node to continue executing the tasks corresponding to all plug-in processes on the first service node according to those execution process parameters.
Optionally, in this embodiment, the second service node is different from the first service node, and the third plug-in process is different from the first plug-in process; the third plug-in process may, but is not limited to, include one or more plug-in processes, and the type of tasks that the one or more plug-in processes are used to perform may, but is not limited to, include the target type.
In this way, whether the heartbeat connection between the service node and the service node scheduler is abnormal is determined, and thereby whether the service node has lost contact is further determined, which improves the reliability of the service nodes.
As an alternative, the method further includes:
S61, in the case that the first service node is set to send a heartbeat signal to the service node scheduler every preset second period, detecting whether the heartbeat signal sent by the first service node has not been received for P consecutive second periods, where P is a positive integer greater than or equal to 2; and in the case that the heartbeat signal sent by the first service node has not been received for P consecutive second periods, determining that the heartbeat connection between the service node scheduler and the first service node is abnormal.
Optionally, in this embodiment, in the case that the first service node is set to send a heartbeat signal to the service node scheduler every preset second period, it is detected whether the heartbeat signal sent by the first service node has failed to be received T times within a preset target duration, where T is a positive integer greater than or equal to 2; in the case that the heartbeat signal sent by the first service node has failed to be received T times within the preset target duration, it is determined that the heartbeat connection between the service node scheduler and the first service node is abnormal.
As shown in fig. 9, the service node scheduler on the server 104 and service node 2 may, but are not limited to, keep a heartbeat connection, and service node 2 may, but is not limited to, send a heartbeat signal to the service node scheduler every preset second period (e.g., 2 minutes, 10 minutes or 30 minutes, which is not limited in the present application). The sending times of the heartbeat signals from service node 2 to the service node scheduler are detected; for example, the sending times may be, but are not limited to, 20:30, 20:33 and 20:43 on day c of month b. In this case, the service node scheduler not having received a heartbeat signal from service node 2 for 5 consecutive second periods may indicate that the heartbeat connection between the service node scheduler and service node 2 is abnormal.
It should be noted that the first period and the second period may be, but are not limited to, the same or different, which is not limited in the present application. In this way, whether the first service node is abnormal is determined in real time according to whether the service node scheduler receives the heartbeat signal sent by the first service node, which improves the timeliness of determining whether the first service node is abnormal.
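A minimal sketch of the scheduler-side check described in S61, assuming the scheduler simply records the arrival time of each heartbeat; the class and its fields are illustrative and not part of the embodiment.

```python
import time
from typing import Dict

class HeartbeatMonitor:
    """Marks a service node abnormal when no heartbeat arrives for P consecutive periods."""

    def __init__(self, period_seconds: float, p: int = 2) -> None:
        self.period = period_seconds
        self.p = p
        self.last_seen: Dict[str, float] = {}

    def on_heartbeat(self, node_id: str) -> None:
        self.last_seen[node_id] = time.monotonic()

    def is_abnormal(self, node_id: str) -> bool:
        last = self.last_seen.get(node_id)
        if last is None:
            return False  # never seen yet; registration is handled separately
        missed_periods = (time.monotonic() - last) / self.period
        return missed_periods >= self.p
```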
As an alternative, the execution process parameters of the target task may be, but are not limited to, acquired from the target storage system through a second service node on the server, and the third plug-in process called through the second service node to continue executing the target task according to the execution process parameters, in the following manner:
S71, deleting the first service node through the service node scheduler, creating the second service node, acquiring the execution process parameters of the target task from the target storage system through the second service node, and calling the third plug-in process through the second service node to continue executing the target task according to the execution process parameters.
Optionally, in this embodiment, in the case that an abnormality occurs in a service node, the original service node may be, but is not limited to being, deleted. As shown in fig. 10, a service node scheduler is deployed on the server 104, and the service node scheduler may, but is not limited to, keep heartbeat connections with service node 1, service node 2 and service node 3. For example, in the case that it is determined that the heartbeat connection between the service node scheduler and service node 2 is abnormal, service node 2 may be, but is not limited to being, deleted, service node 4 may be created, and the third plug-in process may be called through service node 4 to continue executing the target task according to the execution process parameters of the target task.
As an alternative, before the target task is executed by calling the first plug-in process through the first service node on the server, the method further includes:
S81, transmitting the target task to a plug-in scheduling service on the server;
S82, distributing the target task to the first plug-in process in a plug-in process set through the plug-in scheduling service, wherein each plug-in process in the plug-in process set is used for executing one type of task in a task type set, and the task type set comprises the target type.
Optionally, in this embodiment, N plug-in processes are used to perform N types of tasks, where each type of task may be, but is not limited to being, performed by multiple plug-in processes, and the creation time of each plug-in process in the multiple plug-in processes may be, but is not limited to being, the same or different.
For example, plug-in process 1 is used to execute tasks of type 1 and plug-in process 2 is also used to execute tasks of type 1; in the case that the acquired target task sent by the client is a task of type 1, the target task may be, but is not limited to being, executed jointly by plug-in process 1 and plug-in process 2. Plug-in process 1 may be, but is not limited to being, created to execute part of the target task; in the case that plug-in process 1 has completed executing that part, plug-in process 2 may be, but is not limited to being, created, and the unfinished part of the target task may be continued according to the execution process parameters of the target task that were stored in the target storage system while plug-in process 1 was executing the target task.
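The distribution of a task to a plug-in process of the matching type could be sketched as follows; the registry and the load()/submit() accessors on the plug-in handle are assumptions made purely for illustration.

```python
from collections import defaultdict
from typing import Any, Dict, List

class PluginRegistry:
    """Keeps, per task type, the plug-in processes able to execute that type."""

    def __init__(self) -> None:
        self._by_type: Dict[str, List[Any]] = defaultdict(list)

    def register(self, task_type: str, plugin: Any) -> None:
        self._by_type[task_type].append(plugin)

    def dispatch(self, task_type: str, task: Dict[str, Any]) -> Any:
        candidates = self._by_type.get(task_type, [])
        if not candidates:
            raise LookupError(f"no plug-in process registered for type {task_type!r}")
        # Pick the least loaded candidate; load() is an assumed accessor.
        plugin = min(candidates, key=lambda p: p.load())
        plugin.submit(task)  # assumed method that hands the task to the plug-in process
        return plugin
```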
In order to balance the tasks executed by the respective plug-in processes, the plug-in scheduling service may, but is not limited to, distribute the tasks to be executed to appropriate service nodes according to the load information of the plug-in processes executing tasks on the respective service nodes, and may, but is not limited to, handle task contention through a distributed contention lock. The principle of the distributed contention lock may be, but is not limited to, explained and illustrated through the following steps (a code sketch follows these steps):
Step S401, initialization: a persistent node is created in the plug-in scheduling service as the root node of the distributed contention lock, for example a persistent node named "/locks". It should be noted that there is only one distributed contention lock in total.
Step S402, requesting the lock: when a client needs to acquire the distributed contention lock, it creates an ephemeral ordered node under the "/locks" node, for example an ephemeral ordered node named "/locks/lock_". The plug-in scheduling service automatically assigns an incremental number to this node, such as "/locks/lock_0000000001".
Step S403, judging whether the lock is acquired: the client acquires all child nodes under the "/locks" node and sorts them by node number. If the node created by the current client has the smallest number, the client has successfully acquired the lock; it can be understood that only the node with the smallest number holds the lock. If the node whose number immediately precedes that of the current service node releases the lock and its ephemeral ordered node is deleted, this can indicate that the current service node acquires the lock.
Step S404, waiting for the lock: if the node created by the current client is not the node with the smallest number, the distributed contention lock is occupied by another client, and the client needs to wait for the lock to be released. To avoid all clients waiting on the lock at the same time, each client only needs to watch the nearest node whose number is smaller than its own: the client registers a watch event on that node and receives a notification when that node is deleted.
Step S405, releasing the lock: when the client holding the lock completes its task, it deletes the ephemeral ordered node it created. This triggers the plug-in scheduling service to send a notification to the clients waiting for the lock, telling them that the lock has been released.
Step S406, re-contending for the lock: the client receiving the notification re-acquires all child nodes under the "/locks" node and sorts them by node number. Steps S403 to S405 are then performed again until the lock is acquired or the task is completed.
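Steps S401 to S406 can be sketched with a ZooKeeper-style coordination client. The sketch below uses the third-party kazoo library purely to illustrate the ephemeral sequential-node pattern described above; it is not the embodiment's code, and kazoo also ships its own Lock recipe that a real deployment would more likely use.

```python
import threading
from kazoo.client import KazooClient

LOCK_ROOT = "/locks"

def acquire_contention_lock(zk: KazooClient) -> str:
    """Acquire the single distributed contention lock; returns the path of our lock node."""
    zk.ensure_path(LOCK_ROOT)  # S401: persistent root node
    # S402: create an ephemeral, sequentially numbered node under /locks.
    my_path = zk.create(LOCK_ROOT + "/lock_", b"", ephemeral=True, sequence=True)
    my_name = my_path.split("/")[-1]

    while True:
        children = sorted(zk.get_children(LOCK_ROOT))  # S403: sort by node number
        if children[0] == my_name:
            return my_path  # smallest number: we hold the lock

        # S404: watch only the node immediately smaller than ours.
        predecessor = children[children.index(my_name) - 1]
        released = threading.Event()
        if zk.exists(LOCK_ROOT + "/" + predecessor,
                     watch=lambda event: released.set()):
            released.wait()  # S406: re-contend once the predecessor is deleted
        # If the predecessor vanished between the listing and the exists() call,
        # loop immediately and re-contend.

def release_contention_lock(zk: KazooClient, my_path: str) -> None:
    zk.delete(my_path)  # S405: deleting our ephemeral node notifies the next waiter
```

Watching only the immediate predecessor, as in S404, avoids the herd effect of every waiter waking up each time the lock is released.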
In connection with the above explanation and illustration of the distributed contention lock principle, in a distributed system the plug-in scheduling service may be used to coordinate task allocation, handle the contention relationships between service nodes and achieve load balancing of the service nodes through, but not limited to, the following steps (a code sketch follows step S506):
Step S501, registering service nodes: when each service starts, an ephemeral node is created in the plug-in scheduling service to represent the online state of the service node, for example an ephemeral node such as "/works/works_0001" under the "/works" node. In this way, when a service node goes offline, its corresponding ephemeral node is automatically deleted, which makes it convenient to detect the online state of the service node.
Step S502, task allocation: when a new task (e.g., a multimedia task) needs to be processed, the task information is stored under a persistent ordered node of the plug-in scheduling service, for example a persistent ordered node such as "/tasks/task_0001" created under the "/tasks" node. The task information may be stored as the node data.
Step S503, watching the task nodes: each service node needs to watch for child node changes under the "/tasks" node. When a new task is added, all service nodes are notified. To avoid multiple service nodes processing the same task at the same time, task contention may be, but is not limited to being, handled using the distributed contention lock.
Step S504, load balancing: to implement load balancing, a piece of load information may be maintained for each service node in the plug-in scheduling service, for example a persistent node such as "/load/worker_0001" created under the "/load" node, whose node data stores the current load information of the service node. When a service node completes a task or accepts a new task, the corresponding load information is updated.
Step S505, selecting a suitable service node: when a new task needs to be allocated, the service node with the lowest current load can be selected for task allocation according to the load information under the "/load" node. This ensures that tasks are load balanced among the service nodes.
Step S506, task completion: after a service node finishes executing a task, the task node needs to be removed from under the "/tasks" node and the corresponding load information updated. Meanwhile, the task result may be stored under another persistent node, such as "/results/task_0001".
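Continuing the same illustration, steps S501 to S506 might be sketched as follows, again with kazoo standing in for the plug-in scheduling service; the node paths mirror the ones quoted above, while the JSON load encoding and the function names are assumptions.

```python
import json
from kazoo.client import KazooClient

def register_worker(zk: KazooClient, worker_id: str, load: int = 0) -> None:
    # S501: an ephemeral node marks the service node as online; it disappears on disconnect.
    zk.ensure_path("/works")
    zk.ensure_path("/load")
    zk.create(f"/works/{worker_id}", b"", ephemeral=True)
    # S504: load information is kept in a persistent node per service node.
    zk.create(f"/load/{worker_id}", json.dumps({"load": load}).encode())

def submit_task(zk: KazooClient, task_info: dict) -> str:
    # S502: persist the task as a sequential node under /tasks.
    zk.ensure_path("/tasks")
    return zk.create("/tasks/task_", json.dumps(task_info).encode(), sequence=True)

def pick_least_loaded_worker(zk: KazooClient) -> str:
    # S505: choose the service node with the lowest current load.
    def load_of(worker: str) -> int:
        data, _ = zk.get(f"/load/{worker}")
        return json.loads(data)["load"]
    return min(zk.get_children("/load"), key=load_of)

def complete_task(zk: KazooClient, task_path: str, result: dict) -> None:
    # S506: remove the task node and record the result under /results.
    task_name = task_path.split("/")[-1]
    zk.ensure_path("/results")
    zk.create(f"/results/{task_name}", json.dumps(result).encode())
    zk.delete(task_path)
```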
For example, in the case that a target task to be executed is distributed to an appropriate service node, the service node calls a plug-in process for executing tasks of the target type to execute the target task. As shown in fig. 11, the plug-in scheduling service on the server 104 may be, but is not limited to being, used to balance the tasks allocated to service node 1, service node 2 and service node 3. In the case that the target task to be executed sent by the client is acquired, the target task may be, but is not limited to being, allocated to the service node with the smallest load among service nodes 1 to 3, for example service node 2; or, in the case that the loads of service nodes 1 to 3 are close, one service node, for example service node 2, is selected at random. Service node 2 may then assign the target task to plug-in process 4 in a plug-in process set, where the plug-in process set may include, but is not limited to, plug-in process 1, plug-in process 3 and plug-in process 4; plug-in process 1 is used to perform tasks of type 1, plug-in process 3 is used to perform tasks of type 3, plug-in process 4 is used to perform tasks of type 2, and the task type of the target task is type 2.
Through the steps, the coordination, the competition relationship processing and the load balancing of the multimedia tasks in the distributed system can be realized by using the plug-in scheduling service. The mechanism can effectively ensure the balanced distribution of tasks among the service nodes and improve the processing capacity of the system.
As an alternative, the method further includes:
S91, before the target task is executed by calling the first plug-in process through the first service node on the server, the method further includes: creating a target task identifier for the target task; and determining whether a task identified by the target task identifier is included in the task set that has not yet been executed on the server;
S92, the step of calling the first plug-in process through the first service node on the server to execute the target task includes: calling the first plug-in process through the first service node on the server to execute the target task in the case that no task identified by the target task identifier is included in the task set that has not yet been executed on the server;
S93, the method further includes: filtering out the target task in the case that a task identified by the target task identifier is included in the task set that has not yet been executed on the server.
Optionally, in order to avoid repeatedly executing the same task and to reduce the waste of computer resources, the target task identifier may be created according to, but not limited to, the time at which the client sends the target task, the account logged in on the client, and the target event on the client. For example, in the case that the client is a client of an online conference application, a taskId (task identifier) may be created based on, but not limited to, parameters such as the userId (equivalent to the account logged in on the client) and the meetingId carried in the target task, and the sending time of the target task.
In the case that a task identified by the target task identifier is included in the task set that has not yet been executed on the server, it may indicate that the task identified by the target task identifier is already being executed; in this case the target task is filtered out, which may be understood as the target task not being executed again.
In the case that no task identified by the target task identifier is included in the task set that has not yet been executed on the server, it may indicate that the task identified by the target task identifier has not yet been executed; in such a case, the first plug-in process may be, but is not limited to being, called through the first service node on the server to execute the target task. As shown in fig. 12, when a recording task sent by the client is acquired, identifier 1 may be created for the recording task; in the case that no task identified by identifier 1 is included in the task set that has not yet been executed on the server 104, the first plug-in process is called through the service node 2 on the server 104 to execute the recording task, where the task set that has not yet been executed on the server 104 may include task a identified by identifier a, task b identified by identifier b, and task c identified by identifier c. A small sketch of this de-duplication is given below.
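A small sketch of the de-duplication check, assuming the identifier is derived from the userId, the meetingId and the sending time carried in the task, and that the set of not-yet-executed task identifiers is held in memory; a real system could equally keep that set in the distributed storage system. All names here are illustrative.

```python
import hashlib
from typing import Dict, Set

def make_task_id(user_id: str, meeting_id: str, send_time: str) -> str:
    # Illustrative: hash the identifying fields into a stable task identifier.
    raw = f"{user_id}|{meeting_id}|{send_time}".encode()
    return hashlib.sha256(raw).hexdigest()

def submit_if_new(task: Dict[str, str], pending_ids: Set[str]) -> bool:
    """Return True if the task is accepted, False if it is filtered out as a duplicate."""
    task_id = make_task_id(task["userId"], task["meetingId"], task["sendTime"])
    if task_id in pending_ids:
        return False          # already queued or executing: filter out the duplicate
    pending_ids.add(task_id)  # remember it until execution finishes
    return True
```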
In this way, in the case that the client sends the same task repeatedly, for example in a timeout-retry scenario, the task is executed only once, which avoids repeatedly executing the same task; or, in the case that all nodes corresponding to a server cluster acquire the task sent by the client, the task is executed by only one node (one server cluster may include a group of servers), which prevents redundant tasks from being executed and improves the utilization of computer resources.
As an alternative, the method further includes:
s1001, sending an execution result of the target task to the client when the target task is executed.
Optionally, in this embodiment, in the case that execution of the target task is completed, the execution result of the target task is sent to the client, for example, the translation result of the conference content of the current conference is returned, and so on. As shown in fig. 13, plug-in process 1 to plug-in process 4 may be deployed on the service node 2 on the server 104, where plug-in process 1 is used to perform tasks of type 1, plug-in process 2 is used to perform tasks of type 2, plug-in process 3 is used to perform tasks of type 3, and plug-in process 4 is used to perform tasks of type 2.
The recording task to be executed, sent by the client 102, is acquired on the server 104, where the recording task is used to perform a recording operation on the online conference interface on the client 102. In such a case, plug-in process 4 may be, but is not limited to being, called by the service node 2 on the server 104 to perform the recording task, where the type of the recording task is type 2. In the case that the recording task has been executed, the result of executing the recording task, for example a recorded video of the online conference interface over a specified time period, is transmitted to the client.
In this way, functions that would otherwise be executed at the client, such as recording and translation, are migrated to the cloud to run as plug-ins, so that the client can invoke them more flexibly without needing to pay attention to problems such as the state of the plug-in processes, task circulation and data security.
As an alternative, the target task to be performed sent by the client may be obtained on the server by, but not limited to:
s1101, acquiring the target task to be executed, which is sent by the client, on the server, wherein the target task is used for executing the target operation on the target event currently performed on the client.
The task execution method in the embodiment of the present application may be applied to a plurality of scenarios, for example the online conference application, a game application, and the like. As shown in fig. 14, in the case that the client is a client of a shooting game application, the target task to be executed sent by the client may be, but is not limited to, live-broadcasting the game screen of the game currently being played by the account on the client. In such a case, the target task may be, but is not limited to being, executed in real time by calling the corresponding plug-in process through a service node on the server; in the case that the target task is completed, the account currently live-broadcasting and the game screen of the game it is currently playing are displayed in real time on the game screens of the other accounts of the client.
Or, in the case that the target task to be executed sent by the client includes counting the game strategies used by all accounts in the game in which the account is currently participating, the target task may be executed in real time by calling the corresponding plug-in process through a service node on the server; in the case that the target task is completed, the game strategies used by all accounts in the game in which the account is currently participating are displayed in that game.
In this way, for live broadcasting and recording of games, the functions of recording and live-broadcasting the game screen are migrated to the cloud, which reduces the performance pressure on the client while improving the quality and stability of the live-broadcast screen; for game screen recognition and analysis, migrating the recognition and analysis functions to the cloud enables functions such as real-time achievement statistics and tactical analysis, providing players with a richer game experience.
As shown in fig. 15, in the case that the client is a client of an online conference application, the user participates in the online conference through the client 102, and an online conference interface may be, but is not limited to being, displayed on the terminal device, where the other accounts participating in the online conference, such as account A and account B, may be, but are not limited to being, displayed. The online conference interface may, but is not limited to, display the conference content of the online conference and a toolbar, and the toolbar may, but is not limited to, include buttons such as mute, record, apply, settings and end conference. In the case that the apply button is clicked, buttons for opening subtitles, viewing the transcription and interactive comments may be, but are not limited to being, displayed; in the case that the button for viewing the transcription is clicked, it may indicate that the user wishes to transcribe the conference content in the conference interface of the online conference, and in this case the client 102 may, but is not limited to, send a transcription task to the server 104, where the transcription task is used to perform a transcription operation on the conference content in the online conference interface on the client 102.
In order to better understand the task execution method in the embodiment of the present application, the task execution method in the embodiment of the present application is explained and illustrated below in conjunction with an alternative embodiment, which may be, but is not limited to, applicable to the embodiment of the present application.
As shown in fig. 16, the architecture of the task execution system may include, but is not limited to, a service access layer, a distributed task scheduling system and media service nodes. The service access layer is responsible for controlling the processing of media tasks; it receives calls from HTTP (Hypertext Transfer Protocol) requests and RPC (Remote Procedure Call) requests, and may be, but is not limited to being, used for starting execution of tasks (Start Task), updating tasks (Update Task), ending tasks (Stop Task), and so on. The distributed task scheduling system (Distributed Task Deliver System) is used for buffering, distributing and scheduling media tasks, covering recording, transcoding, live broadcasting and other functions.
The media service node is used for distributing audio and video data, controlling stream events, managing plug-ins and providing audio and video function support. The media service node includes a custom plug-in system, which can flexibly access plug-ins such as a translation plug-in (Translator plugin), a meeting plug-in (Meeting plugin) and a live broadcast plug-in (Live broadcast plugin) according to requirements. The media service nodes may include, but are not limited to, a media node cluster (media node cluster), in which a distributed task management system (Distribute task MGR system), Workers (equivalent to service nodes), a plug-in manager (Plugin Manager) and the like are deployed.
As shown in fig. 17, the architecture for the task execution method in the embodiment of the present application may include, but is not limited to, other ends, a cloud and a client, where the other ends keep communicating with the cloud and the client through a network. The tasks to be executed sent by the client may include, but are not limited to, cloud recording or client recording, client notes, cloud live broadcasting, background sending and reviewing, subtitle summarization, and the like. In this case, according to the different task types of the tasks to be executed sent by the client, for example multimedia tasks such as recording, cloud live broadcasting and translation, the plug-ins of the cloud are flexibly called to perform the corresponding processing on the audio and video streams input by the client, so as to support different service lines.
For example, for a video stream, an input video (video-stream-in) in yuv (luminance-chrominance color encoding) format flows into a video mixer (video-mixer), then into a media recorder (media-recorder), and then into a media encoder (media-encoder). After encoding by the media encoder, a media file (media-file) is obtained, together with an H264 (a highly compressed digital video codec standard)/AAC (Advanced Audio Coding) rtmp (Real-Time Messaging Protocol) publisher (rtmp-publisher); the media file may be transcoded by a transcoder (transcoder). Alternatively, the video flows into a peripheral plug-in and keyword detection is then performed.
For audio data, for example, the audio stream may be, but is not limited to being, streamed to a player (Speaker), after which the audio stream in pcm (Pulse Code Modulation) format flows into the video mixer (video-mixer), is transmitted to the media recorder, and the audio stream data is then encoded by the media encoder to obtain the corresponding media file and rtmp-publisher output. Alternatively, the audio stream in pcm format may be, but is not limited to being, streamed into a peripheral plug-in to perform a translation service, and so on. It can be understood that, according to the different task types of the tasks to be executed sent by the client, such as recording, cloud live broadcasting and translation, the plug-ins of the cloud are flexibly called to perform the corresponding processing on the audio and video streams input by the client, so as to support different service lines.
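The data flow described above (mixer, recorder, encoder, then publisher or transcoder) can be pictured as a chain of processing stages. The toy class below only illustrates such chaining; it does not correspond to any actual component of the system, and the stage names in the commented usage are placeholders.

```python
from typing import Callable, Iterable, List

Frame = bytes  # stand-in for a decoded yuv or pcm frame

class Pipeline:
    """Pushes each incoming frame through an ordered list of processing stages."""

    def __init__(self, stages: List[Callable[[Frame], Frame]]) -> None:
        self.stages = stages

    def run(self, stream_in: Iterable[Frame]) -> List[Frame]:
        out: List[Frame] = []
        for frame in stream_in:
            for stage in self.stages:   # e.g. mix -> record -> encode -> publish
                frame = stage(frame)
            out.append(frame)
        return out

# Hypothetical usage: wire the stages named in the text in order.
# pipeline = Pipeline([video_mixer, media_recorder, media_encoder, rtmp_publish])
# pipeline.run(video_stream_in)
```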
As shown in fig. 18, communication between the client and the cloud may be maintained, but not limited to, through a network, and tasks to be performed sent by the client may include, but not limited to, MRA (Multimedia Resource Access ) service, background pressure measurement service, and RTMP (Real-Time Messaging Protocol ) push service.
Among them, the MRA service is a multimedia communication service based on an IP (Internet Protocol ) network. MRA services allow users to access and share multimedia resources, such as audio, video, images, etc., over IP networks. For example, assume that one MRA service application is an online music platform. A user may access and play various music assets through the platform, including songs, albums, playlists, and the like. The user may search, browse and select different music assets and play them on his device. MRA services provide a convenient way for users to access and enjoy multimedia resources over IP networks.
RTMP push services refer to services that forward Real-time messaging protocol (RTMP) streaming media content from one source server to another target server. RTMP is a protocol for audio, video and data transmission, commonly used for real-time streaming media transmission. For example, assuming a live platform, the user pushes his video and audio content to the platform's origin server via the RTMP protocol. The RTMP push service on the platform would then forward the content to multiple target servers so that the viewer can watch the live broadcast through different channels. In this way, the viewer can receive real-time video and audio streams through multiple target servers provided by the platform, enabling a high quality live experience. The RTMP push service functions in this example to transfer live content from the source server to the target server.
In the case of a video conference, audio and video streams may be input from the client, including, but not limited to, data initiated through protocols such as H.323 (an audio-video protocol) and SIP (Session Initiation Protocol). Cloud-side processing is then performed: for example, the video stream of the video conference is input through an MRA connector, flows into a peripheral plug-in, and the corresponding plug-in is called to execute the task, so as to obtain an execution result, which may be, but is not limited to, a video or audio output (video/audio-stream-out). Alternatively, an MP4 audio stream is input through a media player (media-player), or the video stream of an event is input through the media player, and different plug-ins are called to complete different tasks and obtain the execution result, which may be, but is not limited to, an output (video/audio-stream-out) including a video stream or an audio stream; the execution result is then returned to the client through the network.
As shown in fig. 19, the following explanation takes as an example that the target task includes a multimedia task, the plug-in scheduling service includes a Zookeeper, the service node scheduler includes a Manager, the target storage system includes a distributed Redis, the service node includes a Worker, the target task identifier includes a taskId, and the acquired task to be executed sent by the client includes a recording operation performed on the online conference interface of the online conference joined by the account.
The user side submits the multimedia task to the server, and the task is transferred to the Manager. The Manager creates a taskId according to the task requirement, the resource condition of the server and the parameters userId and meetingId in the task, filters repeated multimedia tasks by querying the distributed Redis for whether the taskId already exists, and transmits the filtered multimedia task to the Zookeeper cluster distributed coordination service. The Zookeeper cluster assigns the task to an appropriate Worker. After the Worker receives the task, it selects a suitable plug-in process for processing according to the task type, such as translation, conference or live broadcast. The plug-in process processes the multimedia task of the corresponding task type, for example recording or live broadcasting; at the same time, whenever the Worker's media data, process state data, heartbeat timestamp, task instructions and the like are updated, it synchronizes its memory state (which may include, but is not limited to, the execution process parameters of the task) to Redis in real time and returns the processing result to the Worker. In order to ensure service reliability, to recover in extreme cases and to let the Manager sense the health status of the current process, the Worker and the plug-in processes share a local memory that is refreshed at regular intervals; when a plug-in process has failed to update the memory a certain number of times, it is determined that the plug-in process has crashed, and in this case the memory state of that plug-in process needs to be retrieved from Redis. The Worker pulls up a new plug-in process and restores the memory state retrieved from Redis to the new plug-in process, and the new plug-in process continues to process the previous task. Meanwhile, the Manager and the Worker keep a heartbeat connection; after it is detected that a Worker no longer responds, a new Worker can be pulled up and the memory state restored, thereby ensuring the continuity of tasks and the reliability of the system. The Worker returns the processing result to the Manager, and the Manager returns the processing result to the user and periodically synchronizes the system memory state to the distributed database. In the case that a recording task is executed, the media data may be, but is not limited to being, transferred to the media CDN (Content Delivery Network) by RTMP pushing or delivery.
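The crash-detection and recovery loop run by the Worker can be sketched as follows, with the redis-py client standing in for the distributed Redis; the shared-memory layout (a plain dict of last-refresh timestamps), the key naming and the thresholds are illustrative assumptions.

```python
import json
import time
from typing import Any, Callable, Dict

import redis

MAX_MISSES = 3           # unchanged refreshes before a plug-in is considered crashed
CHECK_INTERVAL = 2.0     # seconds between Worker checks

def sync_state(r: redis.Redis, plugin_id: str, memory_state: Dict[str, Any]) -> None:
    """Called by the plug-in process: mirror its memory state into Redis in real time."""
    r.set(f"plugin:{plugin_id}:state", json.dumps(memory_state))

def watch_plugins(r: redis.Redis,
                  shared_refresh: Dict[str, float],
                  restart_plugin: Callable[[str, Dict[str, Any]], None]) -> None:
    """Worker-side loop: detect stale heartbeats in shared memory and restore from Redis."""
    misses: Dict[str, int] = {}
    last_seen: Dict[str, float] = dict(shared_refresh)
    while True:
        time.sleep(CHECK_INTERVAL)
        for plugin_id, stamp in shared_refresh.items():
            if stamp == last_seen.get(plugin_id):
                misses[plugin_id] = misses.get(plugin_id, 0) + 1
            else:
                misses[plugin_id] = 0
                last_seen[plugin_id] = stamp
            if misses[plugin_id] >= MAX_MISSES:
                # Plug-in process judged crashed: pull its state back from Redis
                # and hand it to a freshly started plug-in process.
                raw = r.get(f"plugin:{plugin_id}:state")
                state = json.loads(raw) if raw else {}
                restart_plugin(plugin_id, state)
                misses[plugin_id] = 0
```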
Through the steps, the workbench schedules plug-in processes for the multimedia tasks, improves the processing efficiency of the multimedia tasks, meets the real-time requirement, and synchronizes the memory state of the workbench in real time by using Redis, thereby ensuring the reliability of the system. Even if the plug-in process or the workbench fails, the prior memory state can be recovered from the Redis, and the system memory and the distributed database are synchronized in time, so that the durability and the restorability of the system are ensured. Even if the system fails, the memory state of the whole system can be recovered from the database.
In summary, the technical scheme of the application optimizes the defects of the prior art, improves the processing efficiency of the multimedia task, and enhances the reliability and the restorability of the system.
As shown in fig. 20, the task execution method in the embodiment of the present application may be implemented, but is not limited to, by:
step S2001, a Task (Submit Task).
In step S2002, task information (Update Task info) is updated.
Step S2003, notification of task allocation (Notify Task Assignment).
In step S2004, a Task (Process Task) is processed.
Step S2005, plug-in crashes (plug Crash).
In step S2006, the memory state is retrieved (Retrieve Memory State).
In step S2007, a new plug-in is started and the memory state is restored (Launch New Plugin and Restore Memory State).
Step S2008, the processing task is continued (Continue Processing Task).
In step S2009, the result and the synchronized memory state are returned (Return Result and sync Memory State).
In step S2010, the result and the periodically synchronized memory state are returned (Return Result and Periodic sync Memory State).
According to the method, the cloud integration is realized through flexible dynamic function assembly, the client and the rear end are opened, different service lines are supported, the functions which are originally executed at the client, such as recording, translation and the like, are migrated to the cloud to serve as plug-ins for operation through writing the service sent by the client in the multimedia node, so that the client can be more flexibly called without paying attention to the problems of process states, task flows, data safety and the like, the memory states of plug-in processes in the multimedia node and the node are synchronized in real time through the Redis cluster, the new plug-in process is conveniently pulled up after the plug-in process crashes, and the memory state is restored to the new plug-in process. The new plug-in process can continue to process the previous tasks, thereby significantly improving the stability and reliability of the system.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
According to another aspect of the embodiment of the application, a device for implementing the task execution method is also provided. As shown in fig. 21, the apparatus includes:
an obtaining unit 2102, configured to obtain, on a server, a target task to be executed sent by a client, where the target task is used to execute a target operation on a target event on the client;
the first executing unit 2104 is configured to invoke, by using a first service node on the server, a first plug-in process to execute the target task, where the first plug-in process is configured to execute a task of a target type, and the type of the target task is the target type;
And the second execution unit 2106 is configured to obtain, when the first plug-in process is abnormal, an execution process parameter of the target task from a target storage system through the first service node, and call, through the first service node, a second plug-in process to continue to execute the target task according to the execution process parameter, where, during execution of the target task by the first plug-in process, the execution process parameter is stored in the target storage system, and the first plug-in process is configured to store, in a target memory, an update time of the execution process parameter of the target task during execution of the target task, and the second plug-in process is used to execute the task of the target type.
According to the embodiment of the application, in the process of calling the plug-in process to execute the task through the service node of the server, the execution process parameters of the task are stored in the storage system, the execution process parameters of the task can be obtained from the storage system under the condition that the plug-in process originally executing the task is abnormal, and the new plug-in process is called by the service node on the server to continue to execute the task according to the execution process parameters.
As an alternative, the apparatus further comprises:
a first detection unit, configured to, in the case that the first plug-in process is set to store, every preset first period, the execution process parameters of the target task in the process of executing the target task into the target storage system, detect whether the execution process parameters of the target task in the process of executing the target task have not been stored into the target storage system for N consecutive first periods, where N is a positive integer greater than or equal to 2; and determine that an abnormality occurs in the first plug-in process in the case that the execution process parameters of the target task in the process of executing the target task have not been stored into the target storage system for the N consecutive first periods; or a second detection unit, configured to, in the case that the first plug-in process is set to store, every preset first period, the execution process parameters of the target task in the process of executing the target task into the target storage system, detect whether a failure to store the execution process parameters of the target task in the process of executing the target task into the target storage system has occurred M times within a preset duration, where M is a positive integer greater than or equal to 1; and determine that an abnormality occurs in the first plug-in process in the case that the failure to store the execution process parameters of the target task in the process of executing the target task into the target storage system has occurred M times within the preset duration.
As an alternative, the first detecting unit includes:
the detection module is used for, in the case that the first service node and the first plug-in process are set to share the same target memory, detecting through the first service node whether the update time of the execution process parameters stored in the target memory changes during N consecutive first periods;
and the determining module is used for determining that the execution process parameters of the target task in the process of executing the target task have not been stored into the target storage system for the N consecutive first periods, in the case that it is detected through the first service node that the update time of the execution process parameters stored in the target memory does not change during the N consecutive first periods.
As an alternative, the second execution unit includes:
a first execution module, configured to, when a set of execution process parameters is stored in the target storage system, obtain, by the first service node, the set of execution process parameters from the target storage system, and call, by the first service node, the second plug-in process to continue to execute the target task according to the set of execution process parameters, where the set of execution process parameters are the execution process parameters at a plurality of moments in the process of executing the target task; or the second execution module is configured to, when the set of execution process parameters is stored in the target storage system, obtain, by using the first service node, the execution process parameter at a last time in the set of execution process parameters from the target storage system, and call, by using the first service node, the second plug-in process to continue to execute the target task according to the execution process parameter at the last time.
As an alternative, the second execution unit includes:
and the third execution module is used for ending the first plug-in process through the first service node, creating the second plug-in process, and calling the second plug-in process to continue to execute the target task according to the execution process parameters.
As an alternative, the second execution unit includes:
and the fourth execution module is used for acquiring the execution process parameters of the target task from the target storage system through the second service node on the server and calling a third plug-in process through the second service node to continue executing the target task according to the execution process parameters under the condition that the first service node is set to be in heartbeat connection with the service node scheduler on the server and the heartbeat connection of the service node scheduler and the first service node is abnormal.
As an alternative, the apparatus further comprises:
a third detecting unit, configured to detect whether P consecutive second periods do not receive a heartbeat signal sent by the first service node, where P is a positive integer greater than or equal to 2, when the first service node is set to send a heartbeat signal to the service node scheduler every a preset second period; and under the condition that the heartbeat signals sent by the first service node are not received in the continuous P second periods, determining that the heartbeat connection between the service node scheduler and the first service node is abnormal.
As an alternative, the fourth execution module is configured to:
deleting the first service node through the service node dispatcher, creating the second service node, acquiring the execution process parameters of the target task from the target storage system through the second service node, and calling the third plug-in process through the second service node to continue to execute the target task according to the execution process parameters.
As an alternative, the apparatus further comprises:
the transmission unit is used for transmitting the target task to a plug-in scheduling service on the server before the target task is executed by calling a first plug-in process through a first service node on the server;
the allocation unit is used for allocating the target task to the first plug-in process in a plug-in process set through the plug-in scheduling service, wherein each plug-in process in the plug-in process set is used for executing one type of task in a task type set, and the task type set comprises the target type.
As an alternative, before the target task is executed by the first service node on the server by calling a first plug-in process, the apparatus further includes: the creation unit is used for creating a target task identifier for the target task; the determining unit is used for determining whether a task identified as the target task identification is included in a task set which is not executed on the server;
The first execution unit includes: a fifth execution module, configured to invoke, by a first service node on the server, a first add-in process to execute the target task when a task identified as the target task identifier is not included in a task set that has not yet been executed on the server;
the apparatus further comprises: and the filtering unit is used for filtering the target task under the condition that the task set which is not executed on the server comprises the task marked as the target task.
As an alternative, the apparatus further comprises:
and the sending unit is used for sending the execution result of the target task to the client under the condition that the target task is executed.
As an alternative, the acquiring unit includes:
the acquisition module is used for acquiring the target task to be executed, which is sent by the client, on the server, wherein the target task is used for executing the target operation on the target event currently performed on the client.
According to still another aspect of the embodiment of the present invention, there is also provided an electronic device for implementing the task performing method, which may be a terminal device or a server shown in fig. 1. The present embodiment is described taking the electronic device as a server as an example. As shown in fig. 22, the electronic device comprises a memory 2202 and a processor 2204, the memory 2202 having stored therein a computer program, the processor 2204 being arranged to perform the steps of any of the method embodiments described above by means of the computer program.
Alternatively, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of the computer network.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
s1, acquiring a target task to be executed, which is sent by a client, on a server, wherein the target task is used for executing target operation on a target event on the client;
s2, calling a first plug-in process to execute the target task through a first service node on the server, wherein the first plug-in process is used for executing a task of a target type, and the type of the target task is the target type;
s3, under the condition that the first plug-in process is abnormal, acquiring an execution process parameter of the target task from a target storage system through the first service node, and calling a second plug-in process through the first service node to continuously execute the target task according to the execution process parameter, wherein the execution process parameter is stored in the target storage system in the process of executing the target task by the first plug-in process, the first plug-in process is set to store the update time of the execution process parameter of the target task in the process of executing the target task into a target memory, and the second plug-in process is used for executing the target type task.
Alternatively, as will be appreciated by those skilled in the art, the structure shown in fig. 22 is merely illustrative, and the electronic device may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, or another terminal device. Fig. 22 does not limit the structure of the above electronic device. For example, the electronic device may also include more or fewer components (such as a network interface) than shown in fig. 22, or have a different configuration from that shown in fig. 22.
The memory 2202 may be used to store software programs and modules, such as program instructions/modules corresponding to the task execution methods and apparatuses in the embodiments of the present invention, and the processor 2204 executes the software programs and modules stored in the memory 2202, thereby executing various functional applications and data processing, that is, implementing the task execution methods described above. Memory 2202 may include high-speed random access memory, but may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 2202 may further include memory remotely located relative to the processor 2204, which may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 2202 may specifically, but not limited to, store information such as sample characteristics of the item and a target virtual resource account number. As an example, as shown in fig. 22, the memory 2202 may include, but is not limited to, the acquisition unit 2102, the first execution unit 2104, and the second execution unit 2106 in the task performing device. In addition, other module units in the task execution device may be included, but are not limited to, and are not described in detail in this example.
Optionally, the transmission device 2206 is configured to receive or transmit data via a network. Specific examples of the network described above may include wired networks and wireless networks. In one example, the transmission device 2206 includes a network adapter (Network Interface Controller, NIC) that may be connected to other network equipment and routers via a network cable to communicate with the internet or a local area network. In one example, the transmission device 2206 is a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In addition, the electronic device further includes: a display 2208 for displaying the order information to be processed; and a connection bus 2210 for connecting the respective module components in the above-described electronic device.
In other embodiments, the terminal device or the server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting the plurality of nodes through a network communication. The nodes may form a peer-to-peer network, and any type of computing device, such as a server, a terminal, etc., may become a node in the blockchain system by joining the peer-to-peer network.
According to one aspect of the present application, there is provided a computer program product, including a computer program/instructions containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network via the communication portion 2309, and/or installed from the removable medium 2311. When the computer program is executed by the central processor 2301, the various functions provided by the embodiments of the present application are performed. The foregoing embodiment numbers of the present application are merely for description and do not represent the superiority or inferiority of the embodiments.
Fig. 23 schematically shows a block diagram of a computer system of an electronic device for implementing an embodiment of the present application. It should be noted that the computer system 2300 of the electronic device shown in fig. 23 is only an example, and should not impose any limitation on the functionality or scope of use of the embodiments of the present application. As shown in fig. 23, the computer system 2300 includes a central processor 2301 (Central Processing Unit, CPU), which can perform various appropriate actions and processes according to a program stored in a read-only memory 2302 (Read-Only Memory, ROM) or a program loaded from a storage section 2308 into a random access memory 2303 (Random Access Memory, RAM). The random access memory 2303 also stores various programs and data required for system operation. The central processor 2301, the read-only memory 2302, and the random access memory 2303 are connected to one another via a bus 2304. An input/output interface 2305 (i.e., an I/O interface) is also connected to the bus 2304.
The following components are connected to the input/output interface 2305: an input section 2306 including a keyboard, a mouse, and the like; an output section 2307 including a cathode ray tube (Cathode Ray Tube, CRT), a liquid crystal display (Liquid Crystal Display, LCD), a speaker, and the like; a storage section 2308 including a hard disk and the like; and a communication section 2309 including a network interface card such as a local area network card or a modem. The communication section 2309 performs communication processing via a network such as the internet. A drive 2310 is also connected to the input/output interface 2305 as needed. A removable medium 2311, such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 2310 as needed, so that a computer program read therefrom is installed into the storage section 2308 as needed.
In particular, the processes described in the method flowcharts may be implemented as computer software programs according to the embodiments of the present application. For example, the embodiments of the present application include a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 2309, and/or installed from the removable medium 2311. The computer program, when executed by the central processor 2301, performs the various functions defined in the system of the present application.
According to one aspect of the present application, a computer-readable storage medium is provided. A processor of a computer device reads computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, causing the computer device to perform the methods provided in the various optional implementations of the above embodiments.
Optionally, in this embodiment, the above computer-readable storage medium may be configured to store a computer program for performing the following steps (an illustrative sketch of these steps is given after step S3):
S1: acquiring, at a server, a target task to be executed that is sent by a client, where the target task is used to perform a target operation on a target event on the client;
S2: invoking, through a first service node on the server, a first plug-in process to execute the target task, where the first plug-in process is used to execute tasks of a target type, and the type of the target task is the target type;
S3: when an exception occurs in the first plug-in process, acquiring, through the first service node, execution process parameters of the target task from a target storage system, and invoking, through the first service node, a second plug-in process to continue executing the target task according to the execution process parameters, where the execution process parameters are stored in the target storage system while the first plug-in process executes the target task, the first plug-in process is configured to store, in a target memory, the update time of the execution process parameters of the target task during execution of the target task, and the second plug-in process is used to execute tasks of the target type.
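By way of illustration only, the following Python sketch maps steps S1 to S3 onto a checkpoint-and-resume flow. All names in it (TargetStorage, PluginProcess, ServiceNode, and the task fields) are assumptions introduced for this sketch and are not part of the present application; the sketch simply assumes that the plug-in process persists its execution process parameters after each unit of work and records the update time in a memory shared with the service node, so that a second plug-in process can resume from the latest checkpoint when an exception occurs.

```python
import time
from dataclasses import dataclass, field


@dataclass
class TargetStorage:
    """Stands in for the target storage system (for example, a shared key-value store)."""
    checkpoints: dict = field(default_factory=dict)  # task_id -> list of execution process parameters

    def save(self, task_id, params):
        self.checkpoints.setdefault(task_id, []).append(params)

    def latest(self, task_id):
        history = self.checkpoints.get(task_id)
        return history[-1] if history else None


class PluginProcess:
    """Stands in for a plug-in process dedicated to one task type."""
    def __init__(self, task_type, storage, shared_memory):
        self.task_type = task_type
        self.storage = storage
        self.shared_memory = shared_memory  # shared with the service node

    def run(self, task, resume_from=None):
        progress = resume_from["progress"] if resume_from else 0
        while progress < task["total_steps"]:
            progress += 1                                    # perform one unit of work
            params = {"task_id": task["id"], "progress": progress}
            self.storage.save(task["id"], params)            # persist execution process parameters
            self.shared_memory["update_time"] = time.time()  # record the update time for liveness checks
        return f"task {task['id']} done"


class ServiceNode:
    """Stands in for the first service node on the server."""
    def __init__(self, storage):
        self.storage = storage
        self.shared_memory = {"update_time": None}

    def execute(self, task):
        first = PluginProcess(task["type"], self.storage, self.shared_memory)
        try:
            return first.run(task)  # S2: invoke the first plug-in process
        except Exception:
            # S3: on an exception, read the latest execution process parameters and
            # let a second plug-in process of the same type continue the task.
            resume_point = self.storage.latest(task["id"])
            second = PluginProcess(task["type"], self.storage, self.shared_memory)
            return second.run(task, resume_from=resume_point)


if __name__ == "__main__":
    # S1: the server acquires a target task sent by a client (represented here as a dict).
    storage = TargetStorage()
    node = ServiceNode(storage)
    print(node.execute({"id": "t-1", "type": "clip-export", "total_steps": 3}))
```

Under these assumptions, resuming from the latest checkpoint means the second plug-in process repeats at most one unit of work instead of re-executing the whole target task.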
Alternatively, in this embodiment, it will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be completed by a program instructing hardware related to a terminal device. The program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, and the like.
If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium. Based on such an understanding, the essence of the technical solution of the present invention, the part contributing to the prior art, or all or part of the technical solution may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
In the foregoing embodiments of the present application, the description of each embodiment has its own emphasis. For a part that is not described in detail in a given embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely exemplary. For example, the division into units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
The foregoing descriptions are merely preferred embodiments of the present invention. It should be noted that several improvements and modifications may be made by those skilled in the art without departing from the principles of the present invention, and these improvements and modifications shall also fall within the protection scope of the present invention.

Claims (16)

1. A task execution method, comprising:
acquiring, at a server, a target task to be executed that is sent by a client, wherein the target task is used to perform a target operation on a target event on the client;
invoking, through a first service node on the server, a first plug-in process to execute the target task, wherein the first plug-in process is used to execute tasks of a target type, and the type of the target task is the target type;
when an exception occurs in the first plug-in process, acquiring, through the first service node, execution process parameters of the target task from a target storage system, and invoking, through the first service node, a second plug-in process to continue executing the target task according to the execution process parameters, wherein the execution process parameters are stored in the target storage system while the first plug-in process executes the target task, the first plug-in process is configured to store, in a target memory, the update time of the execution process parameters of the target task during execution of the target task, and the second plug-in process is used to execute tasks of the target type.
2. The method according to claim 1, wherein the method further comprises:
in a case that the first plug-in process is configured to store, to the target storage system, the execution process parameters of the target task during execution of the target task every preset first period, detecting whether the execution process parameters of the target task during execution of the target task have not been stored to the target storage system for N consecutive first periods, wherein N is a positive integer greater than or equal to 2; and determining that an exception occurs in the first plug-in process in a case that the execution process parameters of the target task during execution of the target task have not been stored to the target storage system for the N consecutive first periods; or
in a case that the first plug-in process is configured to store, to the target storage system, the execution process parameters of the target task during execution of the target task every preset first period, detecting whether a situation in which the execution process parameters of the target task during execution of the target task are not stored to the target storage system occurs M times within a preset duration, wherein M is a positive integer greater than or equal to 1; and determining that an exception occurs in the first plug-in process in a case that the situation in which the execution process parameters of the target task during execution of the target task are not stored to the target storage system occurs M times within the preset duration.
3. The method according to claim 2, wherein the detecting whether the execution process parameters of the target task during execution of the target task have not been stored to the target storage system for N consecutive first periods comprises:
in a case that the first service node and the first plug-in process are configured to share the same target memory, detecting, through the first service node, whether the update time of the execution process parameters stored in the target memory changes during N consecutive first periods;
and determining that the execution process parameters of the target task during execution of the target task have not been stored to the target storage system for the N consecutive first periods in a case that it is detected, through the first service node, that the update time of the execution process parameters stored in the target memory has not changed during the N consecutive first periods.
4. The method according to claim 1, wherein the acquiring, through the first service node, the execution process parameters of the target task from a target storage system, and invoking, through the first service node, a second plug-in process to continue executing the target task according to the execution process parameters comprises:
acquiring, through the first service node, a set of execution process parameters from the target storage system in a case that the set of execution process parameters is stored in the target storage system, and invoking, through the first service node, the second plug-in process to continue executing the target task according to the set of execution process parameters, wherein the set of execution process parameters are execution process parameters at a plurality of moments during execution of the target task; or
acquiring, through the first service node, the execution process parameters at the latest moment in the set of execution process parameters from the target storage system in a case that the set of execution process parameters is stored in the target storage system, and invoking, through the first service node, the second plug-in process to continue executing the target task according to the execution process parameters at the latest moment.
5. The method according to claim 1, wherein the invoking, through the first service node, a second plug-in process to continue executing the target task according to the execution process parameters comprises:
ending the first plug-in process through the first service node, creating the second plug-in process, and invoking the second plug-in process to continue executing the target task according to the execution process parameters.
6. The method according to claim 1, wherein the invoking, through the first service node, a second plug-in process to continue executing the target task according to the execution process parameters comprises:
in a case that the first service node is configured to maintain a heartbeat connection with a service node scheduler on the server and the heartbeat connection between the service node scheduler and the first service node is abnormal, acquiring the execution process parameters of the target task from the target storage system through a second service node on the server, and invoking a third plug-in process through the second service node to continue executing the target task according to the execution process parameters.
7. The method of claim 6, wherein the method further comprises:
detecting whether a heartbeat signal sent by the first service node has not been received for P consecutive second periods in a case that the first service node is configured to send the heartbeat signal to the service node scheduler every preset second period, wherein P is a positive integer greater than or equal to 2; and determining that the heartbeat connection between the service node scheduler and the first service node is abnormal in a case that the heartbeat signal sent by the first service node has not been received for the P consecutive second periods.
8. The method according to claim 6, wherein the acquiring, through the second service node on the server, the execution process parameters of the target task from the target storage system, and invoking, through the second service node, a third plug-in process to continue executing the target task according to the execution process parameters comprises:
deleting the first service node through the service node scheduler, creating the second service node, acquiring the execution process parameters of the target task from the target storage system through the second service node, and invoking the third plug-in process through the second service node to continue executing the target task according to the execution process parameters.
9. The method according to any one of claims 1 to 8, wherein before the invoking, through the first service node on the server, a first plug-in process to execute the target task, the method further comprises:
transmitting the target task to a plug-in scheduling service on the server;
and distributing, through the plug-in scheduling service, the target task to the first plug-in process in a plug-in process set, wherein each plug-in process in the plug-in process set is used to execute one type of task in a task type set, and the task type set comprises the target type.
10. The method according to any one of claims 1 to 8, wherein,
before the invoking, through the first service node on the server, a first plug-in process to execute the target task, the method further comprises: creating a target task identifier for the target task; and determining whether a task identified by the target task identifier is included in a set of unexecuted tasks on the server;
the invoking, through the first service node on the server, a first plug-in process to execute the target task comprises: invoking, through the first service node on the server, the first plug-in process to execute the target task in a case that the set of unexecuted tasks on the server does not include a task identified by the target task identifier;
the method further comprises: filtering out the target task in a case that the set of unexecuted tasks on the server includes a task identified by the target task identifier.
11. The method according to any one of claims 1 to 8, further comprising:
and sending an execution result of the target task to the client in a case that execution of the target task is completed.
12. The method according to any one of claims 1 to 8, wherein the acquiring, at the server, the target task to be executed that is sent by the client comprises:
acquiring, at the server, the target task to be executed that is sent by the client, wherein the target task is used to perform the target operation on the target event currently in progress on the client.
13. A task execution device, characterized by comprising:
an acquisition unit, configured to acquire, at a server, a target task to be executed that is sent by a client, wherein the target task is used to perform a target operation on a target event on the client;
a first execution unit, configured to invoke, through a first service node on the server, a first plug-in process to execute the target task, wherein the first plug-in process is used to execute tasks of a target type, and the type of the target task is the target type;
a second execution unit, configured to: when an exception occurs in the first plug-in process, acquire, through the first service node, execution process parameters of the target task from a target storage system, and invoke, through the first service node, a second plug-in process to continue executing the target task according to the execution process parameters, wherein the execution process parameters are stored in the target storage system while the first plug-in process executes the target task, the first plug-in process is configured to store, in a target memory, the update time of the execution process parameters of the target task during execution of the target task, and the second plug-in process is used to execute tasks of the target type.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program, when run, performs the method according to any one of claims 1 to 12.
15. A computer program product comprising computer programs/instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 12.
16. An electronic device comprising a memory and a processor, characterized in that the memory stores a computer program, and the processor is configured to perform the method according to any one of claims 1 to 12 by means of the computer program.
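To make the detection conditions recited in claims 2, 3, and 7 concrete, the following Python sketch shows one possible reading of them. All class and parameter names (PluginLivenessMonitor, NodeSchedulerMonitor, first_period, second_period) are assumptions introduced for this sketch rather than terms of the application: the first monitor treats the plug-in process as abnormal when the update time of the execution process parameters in the shared target memory stays unchanged for N consecutive first periods, and the second treats the heartbeat connection as abnormal when no heartbeat arrives for P consecutive second periods.

```python
import time


class PluginLivenessMonitor:
    """Sketch of the N-consecutive-period check (claims 2 and 3): the service node
    shares a target memory with the plug-in process and watches whether the recorded
    update time of the execution process parameters changes."""

    def __init__(self, shared_memory, first_period, n_periods):
        self.shared_memory = shared_memory  # e.g. {"update_time": ...}, shared with the plug-in process
        self.first_period = first_period    # length of one first period, in seconds
        self.n_periods = n_periods          # N, a positive integer >= 2

    def plugin_is_abnormal(self):
        unchanged = 0
        last_seen = self.shared_memory.get("update_time")
        for _ in range(self.n_periods):
            time.sleep(self.first_period)
            current = self.shared_memory.get("update_time")
            if current == last_seen:
                unchanged += 1              # no new execution process parameters this period
            else:
                unchanged = 0
                last_seen = current
        # abnormal only if the update time never changed for N consecutive first periods
        return unchanged >= self.n_periods


class NodeSchedulerMonitor:
    """Sketch of the P-missed-heartbeat check (claim 7): the service node scheduler
    expects a heartbeat signal from the service node every second period."""

    def __init__(self, second_period, p_periods):
        self.second_period = second_period  # length of one second period, in seconds
        self.p_periods = p_periods          # P, a positive integer >= 2
        self.last_heartbeat = time.time()

    def on_heartbeat(self):
        self.last_heartbeat = time.time()   # called whenever a heartbeat signal arrives

    def node_is_abnormal(self, now=None):
        now = time.time() if now is None else now
        missed_periods = (now - self.last_heartbeat) / self.second_period
        return missed_periods >= self.p_periods
```

In this reading, a single monitor decision triggers the failover paths sketched earlier: the plug-in check leads to a second plug-in process on the same service node, while the heartbeat check leads the scheduler to hand the task to a second service node and a third plug-in process.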
CN202311307561.1A 2023-10-10 2023-10-10 Task execution method and device, storage medium and electronic equipment Pending CN117056056A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311307561.1A CN117056056A (en) 2023-10-10 2023-10-10 Task execution method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311307561.1A CN117056056A (en) 2023-10-10 2023-10-10 Task execution method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN117056056A true CN117056056A (en) 2023-11-14

Family

ID=88666685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311307561.1A Pending CN117056056A (en) 2023-10-10 2023-10-10 Task execution method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN117056056A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109189641A (en) * 2018-08-29 2019-01-11 郑州云海信息技术有限公司 A kind of the background task monitoring method and device of distributed system
CN111857987A (en) * 2020-06-19 2020-10-30 苏州浪潮智能科技有限公司 ICM-based abnormal task processing method and device
CN112181915A (en) * 2019-07-03 2021-01-05 成都华为技术有限公司 Method, device, terminal and storage medium for executing service
WO2022151668A1 (en) * 2021-01-15 2022-07-21 长鑫存储技术有限公司 Data task scheduling method and apparatus, storage medium, and scheduling tool
CN116781764A (en) * 2022-03-09 2023-09-19 腾讯科技(深圳)有限公司 Long-connection task execution method and device and related equipment

Similar Documents

Publication Publication Date Title
US10911789B2 (en) Automatic failover for live video streaming
US11350139B2 (en) Video live broadcast method and apparatus
US9473677B2 (en) Method and server system for synchronization of audio/video media files
CN108540868B (en) Processing method, device, server, terminal and the storage medium of HLS live streaming
JP5666477B2 (en) Server-side support for seamless rewind and playback of video streaming
US10114689B1 (en) Dynamic playlist generation
US20090249222A1 (en) System and method for simultaneous media presentation
US20080160911A1 (en) P2P-based broadcast system and method using the same
CN103002274A (en) Mobile multimedia real-time transcoding play system and method based on offline download
CN108512814B (en) Media data processing method, device and system
EP3471421B1 (en) Live broadcast video replay method, server, and system
CN112584087B (en) Video conference recording method, electronic device and storage medium
US20180124445A1 (en) Massive file and data object replicator
CN113157418A (en) Server resource allocation method and device, storage medium and electronic equipment
CN110149524B (en) Live stream slicing system, live stream slicing method, live stream slicing device and readable medium
Kumar et al. Cloud enabled media streaming using Amazon Web Services
CN111385593A (en) Cross-platform live content synchronization method and device, storage medium and server
CN113630618A (en) Video processing method, device and system
CN114979695B (en) SRS-based multi-process live broadcast method and device, electronic equipment and storage medium
CN117056056A (en) Task execution method and device, storage medium and electronic equipment
US20120117182A1 (en) Content distribution system
CN113099247B (en) Virtual resource processing method, device, server, storage medium and program product
CN106302617B (en) Data processing method and device applied to computing device
CN112887786B (en) Video playing method and device and computer readable medium
CN110049360A (en) Cross-platform contents controlling method, device, terminal, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination