CN111813530B - Data processing method, device, computing equipment and medium - Google Patents

Data processing method, device, computing equipment and medium

Info

Publication number
CN111813530B
Authority
CN
China
Prior art keywords
task
asynchronous
processing
data model
notification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010809981.XA
Other languages
Chinese (zh)
Other versions
CN111813530A (en)
Inventor
杨家祺
刘炼
邢韬
陶韬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202010809981.XA
Publication of CN111813530A
Application granted
Publication of CN111813530B
Active legal status (current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/5021 Priority

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure provides a data processing method that can be used in the field of distributed technology. The method comprises: acquiring an asynchronous task instruction submitted by a user; determining, according to the asynchronous task instruction, an asynchronous task data model set corresponding to the instruction, wherein the asynchronous task data model set comprises a task submission data model and a task processing data model; generating an asynchronous processing task according to the task submission data model and the asynchronous task instruction; and executing the asynchronous processing task according to the task processing data model to obtain a processing result. The present disclosure also provides a data processing apparatus, a computing device, and a computer storage medium.

Description

Data processing method, device, computing equipment and medium
Technical Field
The present disclosure relates to the field of distributed technology, and more particularly, to a method, apparatus, computing device, and medium for data processing.
Background
Various asynchronous tools, such as message queues and local asynchronous threads, are typically used in software design and development. They decouple relatively time-consuming operations and help keep the system under more reasonable control.
In actual business scenarios there are many similar asynchronous operations, such as performing complex calculations or retrieving data from third-party data sources. Existing asynchronous tools cannot be reused universally: each newly added asynchronous processing requirement calls for repeated design and development, which consumes considerable human resources.
Disclosure of Invention
One aspect of the present disclosure provides a method of data processing, comprising: acquiring an asynchronous task instruction submitted by a user; determining an asynchronous task data model set corresponding to the asynchronous task instruction according to the asynchronous task instruction, wherein the asynchronous task data model set comprises a task submission data model and a task processing data model; generating an asynchronous processing task according to the task submission data model and the asynchronous task instruction; and executing the asynchronous processing task according to the task processing data model to obtain a processing result.
Optionally, the task submission data model includes: a submission mode identification and a submission parameter; the generating an asynchronous processing task according to the task submission data model and the asynchronous task instruction comprises the following steps: determining an analysis mode according to the submission mode identification and the submission parameters; analyzing the asynchronous task instruction in the analysis mode to obtain task information; and generating the asynchronous processing task according to the task information.
Optionally, the task processing data model includes: a processing mode identifier, a processing program identifier and a processing parameter; and executing the asynchronous processing task according to the task processing data model to obtain a processing result comprises the following steps: determining an executor program according to the processing mode identifier, the processing program identifier and the processing parameters; and executing the asynchronous processing task by using the executor program to obtain the processing result.
Optionally, the asynchronous task data model set further comprises: a task notification data model; the method further comprises: generating a task result notification according to the task notification data model; and displaying the task result notification to the user.
Optionally, the task notification data model includes: a notification template and notification parameters; the generating a task result notification according to the task notification data model comprises the following steps: determining a target position in the notification template according to the notification parameters; and writing the processing result into the target position in the notification template to obtain the task result notification.
Another aspect of the present disclosure provides an apparatus for data processing, comprising: the acquisition module is used for acquiring an asynchronous task instruction submitted by a user; the determining module is used for determining an asynchronous task data model set corresponding to the asynchronous task instruction according to the asynchronous task instruction, wherein the asynchronous task data model set comprises a task submitting data model and a task processing data model; the submitting module is used for generating an asynchronous processing task according to the task submitting data model and the asynchronous task instruction; and the processing module is used for executing the asynchronous processing task according to the task processing data model so as to obtain a processing result.
Optionally, the task submission data model includes: a submission mode identification and a submission parameter; the submitting module comprises: the first determining submodule is used for determining an analysis mode according to the submission mode identification and the submission parameters; the first analysis submodule is used for analyzing the asynchronous task instruction in the analysis mode to obtain task information; and the first generation sub-module is used for generating the asynchronous processing task according to the task information.
Optionally, the task processing data model includes: a processing mode identifier, a processing program identifier and a processing parameter; the processing module comprises: the second determining submodule is used for determining an executor program according to the processing mode identifier, the processing program identifier and the processing parameter; and the execution sub-module is used for executing the asynchronous processing task by utilizing the executor program so as to obtain a processing result.
Optionally, the asynchronous task data model set further comprises: a task notification data model; the apparatus further comprises: the notification module is used for generating a task result notification according to the task notification data model; and the display module is used for displaying the task result notification to the user.
Optionally, the task notification data model includes: a notification template and notification parameters; the notification module includes: a third determining submodule, configured to determine a target position in the notification template according to the notification parameters; and the writing sub-module is used for writing the processing result into the target position in the notification template so as to obtain the task result notification.
Another aspect of the present disclosure provides a computing device comprising: one or more processors; and a storage means for storing one or more programs, which when executed by the one or more processors cause the one or more processors to implement the methods as described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions that, when executed, are configured to implement a method as described above.
Another aspect of the present disclosure provides a computer program comprising computer executable instructions which when executed are for implementing a method as described above.
According to the embodiment of the disclosure, an asynchronous processing task is generated according to an asynchronous task instruction and an asynchronous task data model set corresponding to the asynchronous task instruction, and the asynchronous processing task is executed according to a task processing data model, so that a universal asynchronous processing method is realized.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
FIG. 1 schematically illustrates a system architecture of a method of data processing and an apparatus of data processing according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a method of data processing according to an embodiment of the present disclosure;
FIG. 3 illustrates a flow diagram for generating asynchronous processing tasks according to an embodiment of the disclosure;
FIG. 4 illustrates a flow chart of performing asynchronous processing tasks according to an embodiment of the present disclosure;
FIG. 5 illustrates a flow chart of a method of data processing according to another embodiment of the present disclosure;
FIG. 6 illustrates a flow chart of a method of adding a new set of asynchronous task data models in accordance with another embodiment of the present disclosure;
FIGS. 7-8 are flowcharts illustrating an asynchronous task execution method according to another embodiment of the present disclosure;
FIG. 9 schematically illustrates a block diagram of an apparatus for data processing according to an embodiment of the disclosure;
FIG. 10 schematically illustrates a block diagram of an apparatus for data processing according to an embodiment of the disclosure; and
Fig. 11 schematically illustrates a block diagram of a computer system suitable for implementing the methods described above, according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a convention should be interpreted in accordance with the meaning commonly understood by one of skill in the art (e.g., "a system having at least one of A, B and C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a formulation similar to "at least one of A, B or C, etc." is used, in general such a formulation should be interpreted in accordance with the ordinary understanding of one skilled in the art (e.g., "a system having at least one of A, B or C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
Some of the block diagrams and/or flowchart illustrations are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, when executed by the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). Additionally, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon, the computer program product being for use by or in connection with an instruction execution system.
Embodiments of the present disclosure provide a data processing method and a data processing apparatus to which the method can be applied. The method comprises: obtaining an asynchronous task instruction submitted by a user; determining an asynchronous task data model set corresponding to the asynchronous task instruction, wherein the asynchronous task data model set comprises a task submission data model and a task processing data model; generating an asynchronous processing task according to the task submission data model and the asynchronous task instruction; and executing the asynchronous processing task according to the task processing data model to obtain a processing result.
Fig. 1 schematically illustrates a system architecture of a data processing method and a data processing apparatus according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, the system architecture according to this embodiment may include an asynchronous model customizing device 11, an asynchronous task submitting device 12, an asynchronous task processing device 13, an asynchronous task notifying device 14, and an asynchronous task data device 15.
The asynchronous model customizing device 11 can be used for controlling the asynchronous task submitting device 12, the asynchronous task processing device 13 and the asynchronous task notifying device 14 to carry out the processing flow of the asynchronous task, and the data in the whole flow is stored in the asynchronous task data device 15.
The asynchronous model customizing means 11 may be used for customizing the data model required for the asynchronous process. For example, the data model can be customized according to two types of information: first, basic information such as the name, description, CRON (scheduled task) expression and start-stop time; and second, flow information such as the devices and parameters used for task submission, task processing and task notification.
The asynchronous task submission apparatus 12 may be configured to implement the task submission method customized in the asynchronous model customizing apparatus 11, and to convert information submitted by a user into tasks that can be stored in a database. The information submitted by the user can be, for example, an http request or an uploaded file; the submitted information of the asynchronous task is extracted through the cooperation of an analysis mode and device parameters, a new asynchronous processing task is created, and the asynchronous processing task is stored in the database for subsequent processing. The asynchronous task submission apparatus 12 may support multiple submission modes through extension.
The asynchronous task processing device 13 may be configured to execute the asynchronous processing task generated by the asynchronous task submitting device 12 to obtain a processing result. The processing method used when executing the asynchronous processing is taken from the method customized in the asynchronous model customizing device 11. Asynchronous processing is typically a relatively time-consuming operation such as data computation or data query, and the asynchronous task processing device 13 may access Hadoop and Elasticsearch systems to obtain high-performance computing and querying capabilities. After the asynchronous processing is completed, the system writes the processing result into the asynchronous task data device 15.
The asynchronous task notification device 14 may be used to notify the relevant user of the processing results produced by the asynchronous task processing device 13. The notification method is taken from the methods specified in the asynchronous model customizing device 11. The asynchronous task notification device 14 supports flexible extension of the notification mode and notification content.
The asynchronous task data device 15 may be used for data storage, for example, may be used for recording task model information, asynchronous task recording information, asynchronous task processing result information, and the like.
Fig. 2 illustrates a flow chart of a method of data processing according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S210 to S240.
Specifically, in operation S210, an asynchronous task instruction submitted by a user is acquired.
In operation S220, an asynchronous task data model set corresponding to the asynchronous task instruction is determined according to the asynchronous task instruction.
According to the embodiment of the disclosure, the asynchronous task data model sets corresponding to different asynchronous tasks can be preset. Each asynchronous task data model set may include a plurality of data models in which the various configuration information required for the asynchronous task corresponding to that model set is recorded. The data models of the asynchronous task data model set may include, for example, a task submission data model and a task processing data model. The task submission data model may include, for example, configuration items required when submitting an asynchronous task, such as a submission mode identification and submission parameters. The task processing data model may include, for example, configuration items required when processing the asynchronous task, such as a processing mode identification, a handler identification and processing parameters.
According to another embodiment of the present disclosure, the asynchronous task data model set may also include a task base data model. The task base data model may be used to record basic information of the task, such as an identification. The asynchronous task data model set corresponding to the asynchronous task instruction may be determined by comparing the task identification in the asynchronous task instruction with the identifications in the respective task base data models. If the task identification in the asynchronous task instruction matches the identification in a certain task base data model, the asynchronous task data model set to which that task base data model belongs is the asynchronous task data model set corresponding to the asynchronous task instruction.
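Purely as an illustration of the structure described above, the following Java sketch models the asynchronous task data model set and the identification-matching lookup. Every class, field and method name here (AsyncTaskModelSet, resolveModelSet and so on) is an assumption made for readability and does not appear in the disclosure.

```java
import java.util.List;

// Illustrative sketch only: the asynchronous task data model set and the lookup that
// matches a task identification against the task base data models.
public class AsyncTaskModelSet {

    // Task base data model: basic information, including an identification.
    static class BaseModel {
        String taskTypeId;     // identification matched against asynchronous task instructions
        String name;
        boolean enabled;
    }

    // Task submission data model: how a submitted instruction is parsed.
    static class SubmitModel {
        String submitModeId;   // e.g. "HTTP_REQUEST" or "FILE_UPLOAD"
        String submitParams;   // mode-specific parameters (e.g. an API path or an upload directory), JSON
    }

    // Task processing data model: how the generated task is executed.
    static class ProcessModel {
        String processModeId;  // e.g. "JAVA" or "SCRIPT"
        String handlerId;      // processing class or script name to invoke
        String processParams;  // scheduling and execution parameters, JSON
    }

    BaseModel baseModel;
    SubmitModel submitModel;
    ProcessModel processModel;

    // Compare the task identification in the instruction with each base model's
    // identification; the first match determines the model set for the instruction.
    static AsyncTaskModelSet resolveModelSet(String taskId, List<AsyncTaskModelSet> allSets) {
        for (AsyncTaskModelSet set : allSets) {
            if (set.baseModel != null && taskId.equals(set.baseModel.taskTypeId)) {
                return set;
            }
        }
        return null; // no matching model set: the instruction cannot be routed
    }
}
```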
In operation S230, an asynchronous processing task is generated according to the task submission data model and the asynchronous task instruction.
FIG. 3 illustrates a flow chart for generating asynchronous processing tasks according to an embodiment of the disclosure. As shown in fig. 3, operation S230 may include operations S331 to S333, for example.
Specifically, in operation S331, the analysis mode is determined according to the submission mode identification and the submission parameters.
In operation S332, the asynchronous task instruction is analyzed in the determined analysis mode to obtain task information.
In operation S333, an asynchronous processing task is generated from the task information.
According to embodiments of the present disclosure, the submission mode identification may be used to describe the submission mode of the task, e.g., an http request, an uploaded file, etc. The submission parameters are the relevant parameters of the submission mode.
For example, for submission through an http request, the submission mode identification is the http request, and the submission parameter is the application program interface corresponding to the request. The application program interface acquires the http request submitted by the user, the request is then analyzed in the analysis mode for http requests to obtain task information, and an asynchronous processing task is generated according to the task information.
For submission by uploading a file, the submission mode identification is the file upload, and the submission parameter can be, for example, the upload directory of the file. The file is acquired by monitoring the upload directory when the user uploads a file to the designated directory; the file is then analyzed in the analysis mode corresponding to the file to obtain task information, and an asynchronous processing task is generated according to the task information.
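The following is a minimal sketch of this dispatch, under the same naming assumptions as the sketch above: the submission mode identification selects a parser, and the parser uses the submission parameters to turn the raw submission into task information. The TaskParser interface and both parser classes are hypothetical.

```java
import java.util.Map;

// Hypothetical sketch: the submission mode identification selects an analysis mode, and
// the parser turns the raw submission into task information from which the asynchronous
// processing task is generated and stored for later scheduling.
interface TaskParser {
    Map<String, String> parse(String rawSubmission, String submitParams);
}

class HttpRequestParser implements TaskParser {
    // submitParams names the application program interface that received the request.
    public Map<String, String> parse(String rawSubmission, String submitParams) {
        return Map.of("source", "http", "api", submitParams, "body", rawSubmission);
    }
}

class UploadFileParser implements TaskParser {
    // submitParams names the monitored upload directory.
    public Map<String, String> parse(String rawSubmission, String submitParams) {
        return Map.of("source", "file", "directory", submitParams, "content", rawSubmission);
    }
}

class SubmissionDispatcher {
    static TaskParser selectParser(String submitModeId) {
        switch (submitModeId) {
            case "HTTP_REQUEST": return new HttpRequestParser();
            case "FILE_UPLOAD":  return new UploadFileParser();
            default: throw new IllegalArgumentException("Unsupported submission mode: " + submitModeId);
        }
    }
}
```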
In operation S240, asynchronous processing tasks are performed according to the task processing data model to obtain a processing result.
According to the embodiment of the disclosure, an asynchronous processing task is generated according to an asynchronous task instruction and an asynchronous task data model set corresponding to the asynchronous task instruction, and the asynchronous processing task is executed according to a task processing data model, so that a universal asynchronous processing method is realized.
FIG. 4 illustrates a flow chart for performing asynchronous processing tasks according to an embodiment of the present disclosure.
As shown in fig. 4, operation S240 may include operations S441 to S442, for example.
Specifically, in operation S441, the executor program is determined according to the processing mode identification, the processing program identification, and the processing parameters.
In operation S442, asynchronous processing tasks are performed using the executor program to obtain a processing result.
According to embodiments of the present disclosure, the processing mode identification may be used to represent a specific execution method of a task, such as JAVA, script, etc. The handler identification may be used to represent a processing class or a processing script name to be invoked. The processing parameters are relevant parameters required in the processing procedure, such as scheduling parameters, execution parameters, etc.
According to an embodiment of the present disclosure, an executor program (hereinafter simply referred to as an executor) is a program for specifically executing a task. Executors may include, for example, JAVA conventional executors, scripting language executors, data lake executors, Elasticsearch (a distributed full-text search server) executors, and the like.
It should be noted that the above executors are merely examples, and other executors may be included in practical applications. The present application is not particularly limited thereto.
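For illustration only, the sketch below shows how a processing mode identification might select one of the executor families listed above, with the handler identification naming the class or script to run. The AsyncExecutor interface and the concrete class names are assumptions, and the dynamic class loading shown for the JAVA executor is only a simplified hint at the behaviour described later.

```java
// Hypothetical sketch: the processing mode identification selects an executor family,
// the handler identification names the class or script to run, and the processing
// parameters are passed through to the execution.
interface AsyncExecutor {
    String execute(String handlerId, String processParams) throws Exception;
}

class JavaExecutor implements AsyncExecutor {
    // Dynamically load the named processing class and run it (greatly simplified).
    public String execute(String handlerId, String processParams) throws Exception {
        Class<?> handlerClass = Class.forName(handlerId);
        Object handler = handlerClass.getDeclaredConstructor().newInstance();
        return handler.toString(); // placeholder: a real handler would expose a run method
    }
}

class ScriptExecutor implements AsyncExecutor {
    public String execute(String handlerId, String processParams) {
        return "ran script " + handlerId; // placeholder for invoking e.g. a Python script
    }
}

class ExecutorRegistry {
    static AsyncExecutor forProcessMode(String processModeId) {
        switch (processModeId) {
            case "JAVA":   return new JavaExecutor();
            case "SCRIPT": return new ScriptExecutor();
            // Data lake and Elasticsearch executors would be registered the same way.
            default: throw new IllegalArgumentException("Unsupported processing mode: " + processModeId);
        }
    }
}
```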
Fig. 5 illustrates a flow chart of a method of data processing according to another embodiment of the present disclosure.
Illustratively, in this embodiment, the asynchronous task data model set may also include a task notification data model. The task notification data model may be used to specify notification templates and template parameters for the task processing results.
As shown in fig. 5, the method includes operations S550 to S560 in addition to S210 to S240.
In operation S550, a task result notification is generated according to the task notification data model.
According to embodiments of the present disclosure, task result notifications may be generated from the task results and the notification templates and template parameters in the task notification data model.
In operation S560, the task result notification is presented to the user.
According to the embodiment of the disclosure, the task result notification can be presented by means of a short message, voice, mail, or the like.
The asynchronous task data model set is further described below in connection with specific embodiments.
Illustratively, in this embodiment, the asynchronous task data model set includes a task submission data model and a task processing data model. In addition to the task submission data model and the task processing data model, the asynchronous task data model set may also include a task data model and a task notification data model.
According to embodiments of the present disclosure, a task submission data model may be used to specify the manner in which asynchronous task instructions are processed after task submission. The task submission data model may include, for example, configuration items such as the submission mode and model parameters. The submission mode configuration item may include, for example, a submission mode identifier, which may be used to describe the submission mode of the task, for example an http request, an uploaded file, and the like. The submission mode configuration item supports extension. The model parameter configuration item contains the relevant parameters customized for this mode. The model parameter configuration item may be in JSON format, for example.
According to embodiments of the present disclosure, a task processing data model may be used to specify the manner in which the task is processed after submission. The task processing data model may include, for example, configuration items such as the processing manner, processing method and model parameters. The processing mode configuration item supports extension. The processing mode configuration item may be used to specify the core processing method of the asynchronous processing, such as JAVA, script, and the like.
According to embodiments of the present disclosure, a task data model may be used to record basic information of an asynchronous task. The task data model may include, for example, configuration items such as the name, whether enabled, presenter/modifier, and model parameters. The name configuration item is the unique name of the asynchronous task and can be used to distinguish different asynchronous task types. The whether-enabled configuration item is a switch for the asynchronous task and may be used to indicate whether the asynchronous task supports processing. The presenter/modifier configuration item may be used to record operating user information. The model parameter configuration item can be used to record flexible extension parameters of the asynchronous task, which enable the asynchronous processing to support specific functional implementations. The model parameter configuration item may be in JSON format, for example.
According to embodiments of the present disclosure, a task notification data model may be used to specify the notification manner and parameters of the task processing results. The task notification data model may include, for example, configuration items such as notification manners, notification templates, and model parameters. The notification mode configuration item may include, for example, a short message, a voice, a mail, and the like. The notification template configuration item includes, for example, a template of notification content. The notification manner configuration item supports extensions. The model parameter configuration items include, for example, custom replacement parameters in the template of the notification content, the custom replacement parameters being used to represent the location of placeholders in the notification template that will be replaced by the processing results in generating the notification. The model parameter configuration item may be in JSON format, for example.
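As a minimal sketch of the placeholder replacement described here, the snippet below assumes a `${...}` placeholder syntax; both the syntax and the helper name are assumptions, since the disclosure only states that the model parameters mark the positions in the template to be replaced by the processing results.

```java
import java.util.Map;

// Illustrative only: the notification parameters name placeholders in the template,
// and each placeholder is replaced with the corresponding processing result value.
class NotificationRenderer {
    static String render(String template, Map<String, String> resultValues) {
        String notification = template;
        for (Map.Entry<String, String> entry : resultValues.entrySet()) {
            // Assumed placeholder syntax: ${name} marks a target position in the template.
            notification = notification.replace("${" + entry.getKey() + "}", entry.getValue());
        }
        return notification;
    }
}

// Example (hypothetical values):
//   render("Task ${taskName} finished with status ${status}",
//          Map.of("taskName", "daily-report", "status", "SUCCESS"))
//   returns "Task daily-report finished with status SUCCESS".
```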
According to the embodiment of the disclosure, the data model contained in the asynchronous task data model set can be newly added according to actual needs. The method of adding a new set of asynchronous task data models is further described below with reference to FIG. 6, in connection with an embodiment.
Those skilled in the art will appreciate that the following example embodiments are merely for the understanding of the present disclosure, and the present disclosure is not limited thereto.
Step S101: the user newly adds a new asynchronous task model set and formulates the basic information thereof.
Step S102: the user customizes the submission mode and related parameters of the asynchronous task. In this embodiment, the basic submitting mode is divided into two types, one is to submit tasks through a server interface, for example, a user submits tasks in a management page, or an external system submits tasks through an interface, and the other is to submit tasks through an uploading file, for example, a system monitors a designated directory, and when the user uploads a file to the designated directory, the file uploaded by the user is resolved into a task to be processed.
Step S103: the user customizes the task scheduling processing mode, scheduling parameters, execution parameters, executors and other related parameters.
The scheduling parameters may include, for example, a start-stop time, a task-termination execution method (an operation that needs to be performed before the task finally ends, such as resource release or special logic), an abnormal retry count, an abnormal-termination execution method (the method executed after all retries have failed), a CRON expression (a flexible scheduling-time configuration indicating that the task is executed cyclically within a specified time interval), and the like.
The execution parameters may include, for example, the task priority (tasks with higher priority are scheduled first) and a serial/parallel identification (indicating the execution mode supported by the task).
The executor, which represents the specific execution method of the task, may include, for example, the following types:
1) A JAVA conventional executor. The JAVA conventional executor inherits an abstract class, dynamically loads class files at execution time, and can be flexibly replaced and updated.
2) A scripting language executor. Scripting languages have their own advantages over Java in special scenarios, for example, scripting language Python is more convenient in terms of data processing than Java.
3) A data lake executor. In massive-data scenarios, the HQL statements of the task can be configured so that the task is submitted to a data lake and processed by big data tools, obtaining the processing result faster and more efficiently.
4) An Elasticsearch executor. Many asynchronous tasks are data queries, which are of limited efficiency when run as database queries; the Elasticsearch executor queries data through Elasticsearch, which is faster and reduces database pressure.
It should be noted that the above executors are merely examples, and other executors may be included in practical applications. The present application is not particularly limited thereto.
Step S104: the user customizes the task notification mode, the relevant templates and the replacement parameters.
Step S105: it is determined whether the submission mode customized in step S102 has been implemented in the system. If the submission mode customized in step S102 is not yet implemented in the system, step S106 is executed. If the submission mode customized in step S102 is already implemented in the system, step S107 is executed.
Step S106: and if the result of the last step is not, inheriting the abstract method for realizing task submission to support the submission mode.
Step S107: after the task submission mode has been implemented, it is determined whether the task processing mode customized in step S103 has been implemented. If the task processing mode customized in step S103 is not yet implemented in the system, step S108 is executed. If the task processing mode customized in step S103 is already implemented in the system, step S109 is executed.
Step S108: inherits and implements an abstract method of task processing to support the processing style.
Step S109: after the processing mode confirms that the notification mode and the related parameters customized in step S104 are implemented, it is determined whether the notification mode and the related parameters are implemented. If the notification mode and the related parameters customized in step S104 have not been implemented in the system, step S110 is performed. If the notification mode and the related parameters customized in step S104 are already implemented in the system, step S111 is executed.
Step S110: inherits the abstract method of implementing the notification style to support the notification style.
Step S111: the newly added asynchronous processing data model set is completed.
According to embodiments of the present disclosure, the asynchronous task data model set is laterally extensible and longitudinally configurable. Other extended asynchronous processing methods can be implemented by extending and configuring the asynchronous task data model set.
According to the embodiment of the disclosure, when an asynchronous task is newly added, the processing mode of each step is customized individually by constructing the asynchronous operation model, and only the parts needing special processing are implemented through inheritance in code. A large amount of repeated development can thereby be avoided. As the extended parameters and processing modes accumulate, subsequently added asynchronous operations require less and less code and often only a simple modification of the model.
The asynchronous task execution method is further described below with reference to fig. 7 to 8 in conjunction with the specific embodiment.
Those skilled in the art will appreciate that the following example embodiments are merely for the understanding of the present disclosure, and the present disclosure is not limited thereto.
Fig. 7-8 are flowcharts illustrating an asynchronous task execution method according to another embodiment of the present disclosure. As shown in fig. 7 to 8, step S201: The user submits an asynchronous task in the manner customized in step S102.
According to the embodiment of the disclosure, in order to make task scheduling more flexible, a scheduling time management device may be added to schedule asynchronous tasks by means of CRON (scheduled task) management, start-stop time adjustment, and the like. For asynchronous tasks scheduled by CRON, step S201 also includes parsing the CRON expression and recording the next execution time within the start-stop window.
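A brief sketch of recording the next execution time for a CRON-scheduled task is given below. It assumes the Quartz scheduler's CronExpression class purely for illustration; the disclosure does not name a CRON library, and the surrounding class and method names are likewise assumptions.

```java
import java.text.ParseException;
import java.util.Date;
import org.quartz.CronExpression; // assumption: using the Quartz library's CRON parser

// Illustrative only: parse the task's CRON expression and compute the next execution
// time, clamped to the task's configured start-stop window.
class CronSchedulePlanner {
    static Date nextExecution(String cron, Date notBefore, Date stopTime) throws ParseException {
        CronExpression expression = new CronExpression(cron);
        Date next = expression.getNextValidTimeAfter(notBefore);
        if (next == null || (stopTime != null && next.after(stopTime))) {
            return null; // no further executions fall within the start-stop window
        }
        return next;     // persisted with the task record for the scheduling timer
    }
}
```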
Step S202: the system submits a new asynchronous task by calling a submitting mode realized by S106 through the model analysis device. More specifically, the submitting node submits the task, verifies the task, and records the task.
Step S203: the system internal timer schedules a task processing timer. More specifically, the execution timer of the scheduling node schedules, queries for eligible tasks, and submits execution of the tasks.
Step S204: and taking out the task to be processed meeting the current time through the scheduling time manager. And then sorting according to priorities and grouping according to tasks to be processed in serial or parallel according to a quick distribution algorithm (DEADLINE SENSITIVE FAST distribution, DSF), and acquiring an executor according to task types. For serial tasks, the next task is performed after the previous task is executed. For parallel tasks, the tasks are respectively submitted to an execution node on an IaaS (Infrastructure AS A SERVICE) platform, the execution node changes the state of the task into execution, and then the execution of the task is started by an asynchronous thread. The results are recorded after execution is completed.
According to the embodiment of the disclosure, by adding the IaaS platform to the scheduling and using multiple nodes to execute tasks, multi-task execution is improved and elastic scalability is effectively achieved, which addresses the high-concurrency and high-availability problems in scheduling.
Step S205: the data processing is performed in accordance with the task processing manner customized by the asynchronous task at step S108.
Step S206: recording asynchronous processing results.
Step S207: the internal timer of the system schedules a task starting result notification timer.
Step S208: and taking out an asynchronous processing result to be notified.
Step S209: and analyzing the notification mode of the asynchronous type. And generating notification content according to the notification template.
Step S210: and calling an external interface to inform the user.
Step S211: and the user receives the notification result of the asynchronous processing, and the asynchronous processing is ended.
Fig. 9 schematically illustrates a block diagram of an apparatus for data processing according to an embodiment of the disclosure.
As shown in fig. 9, the apparatus 900 for data processing may include, for example, an acquisition module 910, a determination module 920, a submission module 930, and a processing module 940.
In particular, the acquisition module 910 may be used, for example, to acquire asynchronous task instructions submitted by a user.
The determining module 920 may be configured, for example, to determine an asynchronous task data model set corresponding to the asynchronous task instruction according to the asynchronous task instruction, where the asynchronous task data model set includes a task submission data model and a task processing data model.
The submitting module 930 may be configured to generate an asynchronous processing task according to the task submitting data model and the asynchronous task instruction, for example.
The processing module 940 may be configured to execute the asynchronous processing task according to the task processing data model, for example, to obtain a processing result.
According to an embodiment of the present disclosure, the task submission data model may include, for example: a submission mode identification and submission parameters. The submitting module 930 may include, for example: the first determining submodule is used for determining an analysis mode according to the submission mode identification and the submission parameters; the first analysis submodule is used for analyzing the asynchronous task instruction in the analysis mode to obtain task information; and the first generation sub-module is used for generating the asynchronous processing task according to the task information.
According to an embodiment of the present disclosure, a task processing data model may include, for example: a processing mode identification, a processing program identification and a processing parameter. The processing module 940 may include, for example: the second determining submodule is used for determining an executor program according to the processing mode identifier, the processing program identifier and the processing parameter; and the execution sub-module is used for executing the asynchronous processing task by utilizing the executor program so as to obtain a processing result.
According to the embodiment of the disclosure, an asynchronous processing task is generated according to an asynchronous task instruction and an asynchronous task data model set corresponding to the asynchronous task instruction, and the asynchronous processing task is executed according to a task processing data model, so that a universal asynchronous processing method is realized.
Fig. 10 schematically illustrates a block diagram of an apparatus for data processing according to an embodiment of the disclosure.
As shown in fig. 10, the apparatus 1000 for data processing may further include a notification module 1050 and a presentation module 1060 in addition to the acquisition module 910, the determination module 920, the submission module 930, and the processing module 940.
According to an embodiment of the present disclosure, the asynchronous task data model set may further include: the task notification data model. The notification module 1050 may be used, for example, to generate task result notifications according to the task notification data model.
Presentation module 1060 may be used, for example, to present the task result notification to the user.
According to an embodiment of the present disclosure, the task notification data model may include, for example: a notification template and notification parameters; the notification module 1050 may include, for example: a third determining submodule, configured to determine a target position in the notification template according to the notification parameter; and the writing sub-module is used for writing the task result notification into the target position in the notification template so as to obtain the task result notification.
Any number of modules, sub-modules, units, sub-units, or at least some of the functionality of any number of the sub-units according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented as split into multiple modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-on-package, an Application Specific Integrated Circuit (ASIC), or in any other reasonable manner of hardware or firmware that integrates or encapsulates the circuit, or in any one of or a suitable combination of three of software, hardware, and firmware. Or one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be at least partially implemented as computer program modules, which, when executed, may perform the corresponding functions.
For example, any of the acquisition module 910, determination module 920, submission module 930, and processing module 940, notification module 1050, and presentation module 1060 may be combined in one module to be implemented, or any of the modules may be split into multiple modules. Or at least some of the functionality of one or more of the modules may be combined with, and implemented in, at least some of the functionality of other modules. According to embodiments of the present disclosure, at least one of the acquisition module 910, the determination module 920, the submission module 930, and the processing module 940, the notification module 1050, and the presentation module 1060 may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-on-package, an Application Specific Integrated Circuit (ASIC), or in hardware or firmware in any other reasonable manner that integrates or encapsulates the circuitry, or in any one of or a suitable combination of three of software, hardware, and firmware. Or at least one of the acquisition module 910, the determination module 920, the submission module 930, and the processing module 940, the notification module 1050, and the presentation module 1060 may be at least partially implemented as computer program modules that, when executed, perform the corresponding functions.
Fig. 11 schematically illustrates a block diagram of a computer system suitable for implementing the methods described above, according to an embodiment of the disclosure. The computer system illustrated in fig. 11 is merely an example, and should not be construed as limiting the functionality and scope of use of the embodiments of the present disclosure.
As shown in FIG. 11, computer system 1100 includes a processor 1110 and a computer-readable storage medium 1120. The computer system 1100 may perform methods according to embodiments of the present disclosure.
In particular, processor 1110 can include, for example, a general-purpose microprocessor, an instruction set processor, and/or an associated chipset and/or special-purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. Processor 1110 may also include on-board memory for caching purposes. Processor 1110 may be a single processing unit or multiple processing units for performing different actions of the method flows according to embodiments of the disclosure.
The computer-readable storage medium 1120 may be, for example, a non-volatile computer-readable storage medium. Specific examples include, but are not limited to: magnetic storage devices such as magnetic tape or hard disk (HDD); optical storage devices such as compact discs (CD-ROM); memory such as random access memory (RAM) or flash memory; and the like.
The computer-readable storage medium 1120 may include a computer program 1121, which computer program 1121 may include code/computer-executable instructions that, when executed by the processor 1110, cause the processor 1110 to perform a method according to an embodiment of the present disclosure or any variation thereof.
The computer program 1121 may comprise computer program code, including, for example, computer program modules. For example, in an example embodiment, code in the computer program 1121 may include one or more program modules, for example modules 1121A, 1121B, and so on. It should be noted that the division and number of modules is not fixed, and a person skilled in the art may use suitable program modules or combinations of program modules depending on the actual situation; when these program modules are executed by the processor 1110, they enable the processor 1110 to perform the method according to the embodiments of the present disclosure or any variation thereof.
At least one of the acquisition module 910, the determination module 920, the submission module 930, and the processing module 940, the notification module 1050, and the presentation module 1060 may be implemented as computer program modules as described with reference to fig. 11, which, when executed by the processor 1110, may implement the respective operations described above.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be combined in various combinations and/or combinations, even if such combinations or combinations are not explicitly recited in the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be variously combined and/or combined without departing from the spirit and teachings of the present disclosure. All such combinations and/or combinations fall within the scope of the present disclosure.
While the present disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents. The scope of the disclosure should, therefore, not be limited to the above-described embodiments, but should be determined not only by the following claims, but also by the equivalents of the following claims.

Claims (10)

1. A method of data processing, comprising:
Acquiring an asynchronous task instruction submitted by a user;
Determining an asynchronous task data model set corresponding to the asynchronous task instruction according to the asynchronous task instruction, wherein the asynchronous task data model set comprises a task basic data model, a task submitting data model and a task processing data model;
Generating an asynchronous processing task according to the task submission data model and the asynchronous task instruction; and
Executing the asynchronous processing task according to the task processing data model to obtain a processing result;
Wherein, according to the asynchronous task instruction, determining an asynchronous task data model set corresponding to the asynchronous task instruction includes: comparing the task identification in the asynchronous task instruction with the identifications in the task basic data models; determining an asynchronous task data model set corresponding to the asynchronous task instruction according to the comparison result;
The task submission data model includes: a submission mode identification and a submission parameter; the generating an asynchronous processing task according to the task submission data model and the asynchronous task instruction comprises the following steps: determining an analysis mode according to the submission mode identification and the submission parameters; analyzing the asynchronous task instruction in the analysis mode to obtain task information; and generating the asynchronous processing task according to the task information.
2. The method of claim 1, wherein the task processing data model comprises: a processing mode identifier, a processing program identifier and a processing parameter;
And executing the asynchronous processing task according to the task processing data model to obtain a processing result comprises the following steps:
determining an executor program according to the processing mode identifier, the processing program identifier and the processing parameters; and
And executing the asynchronous processing task by using the executor program to obtain a processing result.
3. The method of claim 1, wherein the asynchronous task data model set further comprises: a task notification data model;
the method further comprising:
generating a task result notification according to the task notification data model; and
displaying the task result notification to the user.
4. A method according to claim 3, wherein the task notification data model comprises: a notification template and notification parameters;
wherein the generating a task result notification according to the task notification data model comprises:
determining a target position in the notification template according to the notification parameters; and
writing a task result into the target position in the notification template to obtain the task result notification.
5. An apparatus for data processing, comprising:
an acquisition module configured to acquire an asynchronous task instruction submitted by a user;
a determining module configured to determine, according to the asynchronous task instruction, an asynchronous task data model set corresponding to the asynchronous task instruction, wherein the asynchronous task data model set comprises a task basic data model, a task submission data model and a task processing data model;
a submitting module configured to generate an asynchronous processing task according to the task submission data model and the asynchronous task instruction, wherein the task submission data model comprises a submission mode identification and a submission parameter, and wherein determining, according to the asynchronous task instruction, the asynchronous task data model set corresponding to the asynchronous task instruction comprises: comparing a task identification in the asynchronous task instruction with identifications in task basic data models; and determining the asynchronous task data model set corresponding to the asynchronous task instruction according to a result of the comparison; and
a processing module configured to execute the asynchronous processing task according to the task processing data model to obtain a processing result;
wherein the submitting module comprises:
a first determining submodule configured to determine a parsing mode according to the submission mode identification and the submission parameter;
a first parsing submodule configured to parse the asynchronous task instruction in the parsing mode to obtain task information; and
a first generation submodule configured to generate the asynchronous processing task according to the task information.
6. The apparatus of claim 5, wherein the task processing data model comprises: a processing mode identifier, a processing program identifier and a processing parameter;
wherein the processing module comprises:
a second determining submodule configured to determine an executor program according to the processing mode identifier, the processing program identifier and the processing parameter; and
an execution submodule configured to execute the asynchronous processing task by using the executor program to obtain the processing result.
7. The apparatus of claim 5, wherein the asynchronous task data model set further comprises: a task notification data model;
the apparatus further comprising:
a notification module configured to generate a task result notification according to the task notification data model; and
a display module configured to display the task result notification to the user.
8. The apparatus of claim 7, wherein the task notification data model comprises: a notification template and notification parameters;
wherein the notification module comprises:
a third determining submodule configured to determine a target position in the notification template according to the notification parameters; and
a writing submodule configured to write a task result into the target position in the notification template to obtain the task result notification.
9. A computing device, comprising:
one or more processors;
a memory for storing one or more computer programs,
wherein the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 4.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to implement the method of any one of claims 1 to 4.
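
For readers who want a concrete picture of the claimed flow, the following is a minimal Python sketch of how the method recited in claims 1 and 2 might be realized. It is purely illustrative and is not part of the claims or the description: every class, registry and field name (TaskBasicDataModel, MODEL_SETS, PARSERS, EXECUTORS, and so on) is a hypothetical choice made for this sketch, and the patent does not prescribe any particular programming language or data structure.

# --- Illustrative sketch for claims 1 and 2 (hypothetical names, not part of the patent) ---
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

@dataclass
class TaskBasicDataModel:
    task_id: str                      # identification compared with the instruction's task identification

@dataclass
class TaskSubmissionDataModel:
    submission_mode_id: str           # e.g. "json" or "cli"; selects the parsing mode
    submission_params: Dict[str, str] = field(default_factory=dict)

@dataclass
class TaskProcessingDataModel:
    processing_mode_id: str           # e.g. "local" or "remote"
    program_id: str                   # names the executor program
    processing_params: Dict[str, str] = field(default_factory=dict)

@dataclass
class AsyncTaskDataModelSet:
    basic: TaskBasicDataModel
    submission: TaskSubmissionDataModel
    processing: TaskProcessingDataModel

# Hypothetical registries; a real system would load these from configuration or a database.
MODEL_SETS: Dict[str, AsyncTaskDataModelSet] = {}
PARSERS: Dict[str, Callable[[str, Dict[str, str]], dict]] = {}
EXECUTORS: Dict[str, Callable[[dict, Dict[str, str]], str]] = {}

def find_model_set(instruction: dict) -> Optional[AsyncTaskDataModelSet]:
    # Compare the task identification in the instruction with the identifications
    # of the registered task basic data models (claim 1, determining step).
    return MODEL_SETS.get(instruction["task_id"])

def generate_task(model_set: AsyncTaskDataModelSet, instruction: dict) -> dict:
    # Choose a parsing mode from the submission mode identification and parameters,
    # parse the instruction into task information, and build the asynchronous task.
    parser = PARSERS[model_set.submission.submission_mode_id]
    task_info = parser(instruction["payload"], model_set.submission.submission_params)
    return {"info": task_info, "model_set": model_set}

def execute_task(task: dict) -> str:
    # Determine the executor program from the processing data model and run it (claim 2).
    processing = task["model_set"].processing
    executor = EXECUTORS[processing.program_id]
    return executor(task["info"], processing.processing_params)

def handle_instruction(instruction: dict) -> Optional[str]:
    # End-to-end flow of claim 1: locate the model set, generate the task, execute it.
    model_set = find_model_set(instruction)
    if model_set is None:
        return None          # no matching task basic data model
    task = generate_task(model_set, instruction)
    return execute_task(task)
# --- end of sketch ---

After registering a model set, a parser and an executor under the relevant identifiers, calling handle_instruction({"task_id": ..., "payload": ...}) returns the processing result, mirroring the acquire / determine / generate / execute sequence of claim 1.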
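A similarly hedged sketch of the notification step of claims 3 and 4, again with hypothetical names: the notification parameters are assumed to name a placeholder (the "target position") in a string template, and the task result is written into that placeholder to form the task result notification. Python's string.Template is used only for illustration; the patent does not specify any particular template mechanism.

# --- Illustrative sketch for claims 3 and 4 (hypothetical names, not part of the patent) ---
from string import Template
from typing import Dict

def generate_task_result_notification(template_text: str,
                                      notification_params: Dict[str, str],
                                      task_result: str) -> str:
    # The notification parameters identify the target position (a placeholder name)
    # in the notification template; the task result is written into that position.
    values = dict(notification_params)
    target = notification_params.get("target_field", "result")
    values[target] = task_result
    return Template(template_text).safe_substitute(values)

# Example (hypothetical template and parameters):
#   generate_task_result_notification(
#       "Task ${task_name} finished with result: ${result}",
#       {"target_field": "result", "task_name": "nightly-export"},
#       "42 records processed")
#   -> "Task nightly-export finished with result: 42 records processed"
# --- end of sketch ---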
CN202010809981.XA 2020-08-12 2020-08-12 Data processing method, device, computing equipment and medium Active CN111813530B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010809981.XA CN111813530B (en) 2020-08-12 2020-08-12 Data processing method, device, computing equipment and medium

Publications (2)

Publication Number Publication Date
CN111813530A (en) 2020-10-23
CN111813530B (en) 2024-04-19

Family

ID=72860373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010809981.XA Active CN111813530B (en) 2020-08-12 2020-08-12 Data processing method, device, computing equipment and medium

Country Status (1)

Country Link
CN (1) CN111813530B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10817532B2 (en) * 2018-05-10 2020-10-27 Shenzhen Jingtai Technology Co., Ltd. Scientific computing process management system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458407A (en) * 2019-07-12 2019-11-15 平安普惠企业管理有限公司 A kind of method for allocating tasks, device and computer readable storage medium
CN111309458A (en) * 2019-07-12 2020-06-19 北京关键科技股份有限公司 Asynchronous cooperative processing method for multi-node tasks
CN111182061A (en) * 2019-12-30 2020-05-19 苏宁云计算有限公司 Task distribution processing method, system, computer device and storage medium

Also Published As

Publication number Publication date
CN111813530A (en) 2020-10-23

Similar Documents

Publication Publication Date Title
CN110286884B (en) Micro-service arrangement method, device, equipment and computer readable storage medium
CN108345977B (en) Service processing method and device
US10120658B2 (en) Method and system for realizing software development tasks
CN105589783A (en) Application program lag problem data obtaining method and device
US11003654B2 (en) Systems and methods for requesting, tracking and reporting modifications to a record
JP2015018295A (en) Application management device, application management system, and program
CN110427258B (en) Resource scheduling control method and device based on cloud platform
US9684689B2 (en) Distributed parallel processing system having jobs processed by nodes based on authentication using unique identification of data
CN110471754B (en) Data display method, device, equipment and storage medium in job scheduling
CN112581018B (en) Method, system, device and storage medium for managing process tasks
CN112559156B (en) Multi-dependent task grouping management method, device, equipment and storage medium
WO2015096661A1 (en) Project creation method and device based on configuration system, project test method and device, and background test method and device for configuration system
CN113672500B (en) Deep learning algorithm testing method and device, electronic device and storage medium
CN113835705B (en) Big data service product development method, device and system
CN113535326B (en) Calculation flow scheduling system based on high-throughput sequencing data
US8060885B2 (en) Creating task queries for concrete resources using alias selection fields specifying formal resources and formal relationships
CN111813530B (en) Data processing method, device, computing equipment and medium
CN110868309A (en) Method and device for processing resources in VNFM (virtual network configuration function) and computer storage medium
WO2023160418A1 (en) Resource processing method and resource scheduling method
US7127446B1 (en) File system based task queue management
CN113986642A (en) Task monitoring system, method and device, electronic equipment and storage medium
CN110968406B (en) Method, device, storage medium and processor for processing task
CN110888649B (en) Application deployment method, application baseline creation method and device
CN109960659B (en) Method and device for detecting application program
CN108154345B (en) Event scheduling method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant