CN111580939A - Method and device for hierarchical asynchronous transaction processing - Google Patents

Method and device for hierarchical asynchronous transaction processing

Info

Publication number
CN111580939A
Authority
CN
China
Prior art keywords
transaction
subtasks
queue
subtask
marking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010249946.7A
Other languages
Chinese (zh)
Other versions
CN111580939B (en)
Inventor
李传松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weimeng Chuangke Network Technology China Co Ltd
Original Assignee
Weimeng Chuangke Network Technology China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weimeng Chuangke Network Technology China Co Ltd filed Critical Weimeng Chuangke Network Technology China Co Ltd
Priority to CN202010249946.7A priority Critical patent/CN111580939B/en
Publication of CN111580939A publication Critical patent/CN111580939A/en
Application granted granted Critical
Publication of CN111580939B publication Critical patent/CN111580939B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/466Transaction processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the invention provide a method and a device for processing transactions in a hierarchical and asynchronous manner, wherein the method comprises the following steps: dividing a transaction into a plurality of subtasks according to the internal business logic relationship of the transaction; according to the internal business logic relationship and the priority processing marks, marking the subtasks that carry a priority processing mark and need to be processed first into a front queue, and marking the other subtasks into a rear queue; executing the subtasks in the front queue of the transaction; transmitting parameters generated by executing the subtasks in the front queue to other related subtasks according to the internal business logic relationship of the transaction; after the subtasks in the front queue have been processed, executing the subtasks in the rear queue of the transaction; and feeding back and displaying the execution results of all the subtasks of the transaction. Subtasks of high importance can thereby be executed first, which shortens the processing period of the whole transaction and improves transaction processing efficiency.

Description

Method and device for hierarchical asynchronous transaction processing
Technical Field
The invention relates to the field of multitask transaction processing, in particular to a method and a device for processing transactions in a hierarchical and asynchronous mode.
Background
In a micro public-welfare (charity) project, a user initiates a donation, and a payment-success transaction is started after the payment succeeds. The whole transaction has to be divided into updating the project contribution information, updating the individual contribution information, updating the leaderboard information, sharing to the microblog, sending private message notifications to the project owner and the contributor respectively, recording the payment flow, updating the order information, and checking the payment information. After the payment callback, the transaction enters the transaction queue; the queue-processing task receives it from the queue and starts processing the transaction. The method or call interface of each class is executed in business order and in the execution order of the program code, according to the dependency logic. When the execution or call of a sub-module fails, the transaction is aborted and the next queued transaction is processed; if execution completes normally, the task is finally checked and verified; any exception is recorded and remarked; and when the check and verification are finished, the execution of the transaction is finished.
In the process of implementing the invention, the applicant finds that at least the following problems exist in the prior art:
throughout the execution process, strongly consistent, dependency-ordered sequential execution is required, and once one step fails the whole transaction is aborted or considered failed. All subtasks are completed in a single execution flow, so the processing period of the whole transaction is long and the processing of subsequent incoming transactions is delayed, which leads to queue accumulation and business anomalies.
Disclosure of Invention
The embodiments of the invention provide a method and a device for processing a transaction in a hierarchical and asynchronous manner, which shorten the processing period of the whole transaction, improve the processing efficiency of subsequent transactions, and ensure that the transaction queue is processed normally so as to avoid accumulation.
To achieve the above object, in one aspect, an embodiment of the present invention provides a method for hierarchical asynchronous transaction processing, including:
dividing the transaction into a plurality of subtasks according to the internal business logic relationship of the transaction, marking the subtasks which have priority processing marks and need to be processed in advance according to the internal business logic relationship and the priority processing marks in a front queue, and marking other subtasks of the transaction in a rear queue;
traversing the subtasks of the transaction according to a task list, and executing the subtasks in the front queue of the transaction; transmitting parameters generated by executing the subtasks in the front queue to other related subtasks according to the internal business logic relationship of the transaction;
after the subtasks in the front queue of the transaction are processed, executing the subtasks in the rear queue of the transaction in combination with the parameters generated by executing the subtasks in the front queue;
and feeding back and displaying the execution results of all the subtasks of the transaction.
On the other hand, an embodiment of the present invention further provides a device for hierarchical asynchronous transaction processing, including:
the subtask marking module is used for dividing the transaction into a plurality of subtasks according to the internal business logic relationship of the transaction, marking the subtask which has a priority processing mark and needs to be processed in advance according to the internal business logic relationship and the priority processing mark in the front queue, and marking other subtasks of the transaction in the rear queue;
the subtask priority processing module is used for traversing the subtasks of the transaction according to a task list and executing the subtasks in the front queue of the transaction; transmitting parameters generated by executing the subtasks in the front queue to other related subtasks according to the internal business logic relationship of the transaction;
the subtask delay processing module is used for executing the subtasks in the post queue of the transaction, after the subtasks in the pre-queue of the transaction are processed, in combination with the parameters generated by executing the subtasks in the pre-queue;
and the transaction result feedback module is used for feeding back and displaying the execution results of all the subtasks of the transaction.
The technical scheme has the following beneficial effects: a transaction is divided into a plurality of subtasks according to its business logic relationships, and when the load is high the processing priority of the subtask modules is adjusted during transaction execution, so that the subtasks of high importance are executed first while the others are deferred and processed asynchronously later. The processing period of the whole transaction is therefore shortened, and the transaction queue is kept processing normally so as to avoid accumulation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow diagram of a hierarchical asynchronous transaction processing method implemented by the present invention;
FIG. 2 is a block diagram of a hierarchical asynchronous transaction device in which the present invention is implemented;
FIG. 3 is a schematic flow diagram of a hierarchical asynchronous transaction operation implemented by the present invention;
FIG. 4 is a flow diagram of the hierarchical asynchronous transaction setup logic implemented by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, in connection with an embodiment of the present invention, there is provided a method for hierarchical asynchronous processing of transactions, including:
s101: dividing the transaction into a plurality of subtasks according to the internal business logic relationship of the transaction, marking the subtasks which have priority processing marks and need to be processed in advance according to the internal business logic relationship and the priority processing marks in a front queue, and marking other subtasks of the transaction in a rear queue;
s102: traversing the subtasks of the affairs according to a task list, and executing the subtasks in the front queue of the affairs; transmitting parameters generated by executing the subtasks in the front queue to other related subtasks according to the internal business logic relationship of the transaction;
s103: after the sub tasks in the front queue of the transaction are processed, combining the parameters generated by executing the sub tasks in the front queue to execute the sub tasks in the rear queue of the transaction;
s104: and feeding back and displaying the execution results of all the subtasks of the transaction.
Preferably, step 101 specifically includes:
distinguishing the internal business logic relationships of the subtasks of the transaction through the configuration file of the transaction, and marking the subtasks into a front queue or a rear queue through the subtask attributes in the configuration file; the subtask attributes in the configuration file comprise: a processing level mark of each subtask, and the processing order relationship between each subtask and the other related subtasks according to the internal logic relationship of the business.
Preferably, in step 102, after the sub-tasks in the pre-queue of the transaction are processed, the method further includes:
s1021: checking whether the subtask request in the front queue of the transaction fails or not, and adjusting and marking the subtask with failed request into the rear queue of the transaction;
s1022: checking whether the execution result of the subtask of the front queue of the transaction is correct, and marking the subtask with the wrong execution result into the rear queue of the transaction in an adjusting way.
Preferably, in step 103, when executing the subtask in the post queue of the transaction, the method further includes:
s1031: circularly checking whether the sub tasks in the post queue of the transaction fail to request, if so, executing the sub tasks again until all the sub tasks in the post queue of the transaction are executed or a set termination condition is met;
s1032: and circularly checking whether the execution result of the subtasks in the post queue of the transaction is correct or not, and executing the subtasks with wrong results again until all the subtasks in the post queue of the transaction are executed or a set termination condition is met.
Preferably, after all the subtasks in the post queue of the transaction have been executed or a set termination condition is met, the method further includes:
S105: checking whether all the subtasks in the post queue of the transaction have been executed or the set termination condition is met, and feeding back the subtasks with wrong execution results or with failed execution for manual handling;
S106: when the condition for re-executing the subtasks is satisfied, manually putting the subtasks with wrong execution results or with failed execution into the post queue, and executing them again.
Preferably, step 101 specifically includes:
when the number of all subtasks of the transaction is lower than a preset threshold value, all subtasks of the transaction are marked in the front queue.
As shown in fig. 2, in conjunction with the embodiment of the present invention, there is also provided an apparatus for hierarchical asynchronous transaction processing, including:
the subtask marking module 21 is configured to divide the transaction into a plurality of subtasks according to an internal business logic relationship of the transaction, mark a subtask having a priority processing flag and requiring prior processing according to the internal business logic relationship, in the pre-queue, and mark other subtasks of the transaction in the post-queue;
the subtask priority processing module 22 is configured to traverse the subtasks of the transaction according to the task list, and execute the subtasks in the pre-queue of the transaction; transmitting parameters generated by executing the subtasks in the front queue to other related subtasks according to the internal business logic relationship of the transaction;
a subtask delay processing module 23, configured to, after the subtasks in the pre-queue of the transaction are processed, execute the subtasks in the post-queue of the transaction in combination with the parameters generated by the subtasks in the pre-queue;
and the transaction result feedback module 24 is used for feeding back and displaying the execution results of all the subtasks of the transaction.
Preferably, the subtask marking module 21 is specifically configured to:
distinguishing the internal business logic relationships of the subtasks of the transaction through the configuration file of the transaction, and setting the subtasks belonging to the front queue and the rear queue through the subtask attributes in the configuration file; the subtask attributes in the configuration file comprise: a processing level mark of each subtask, and the processing order relationship between each subtask and the other related subtasks according to the internal logic relationship of the business.
Preferably, the subtask priority processing module includes:
the first checking sub-module 221 is configured to check whether the sub-task request in the pre-queue of the transaction fails, and/or check whether an execution result of the sub-task in the pre-queue of the transaction is correct;
the subtask downgrading submodule 222 is configured to, when the first checking submodule 221 checks that a subtask request in the pre-queue of the transaction fails, mark a subtask adjustment that has failed to be requested in the post-queue of the transaction; and/or when the first checking submodule 221 checks that the execution result of the subtask in the pre-queue of the transaction is wrong, marking the adjustment of the subtask with the wrong execution result into the post-queue of the transaction.
Preferably, the subtask delay processing module includes:
the second checking sub-module 231 is configured to cyclically check whether the request of the sub-task in the post-queue of the transaction fails, and/or, whether the execution result of the sub-task in the post-queue of the transaction is correct;
a retry submodule 232, configured to, when the second checking submodule 231 circularly checks that the subtask request in the post-queue of the transaction fails, execute the subtask again until all the subtasks in the post-queue of the transaction are completely executed or a set termination condition is met;
the compensation submodule 233 is configured to, when the second checking submodule 231 circularly checks that the execution result of the subtask in the post-queue of the transaction is incorrect, execute the incorrect subtask again until all the subtasks in the post-queue of the transaction are executed or a set termination condition is met.
Preferably, the subtask delay processing module further includes:
a third checking submodule 25, configured to, after all the subtasks in the post-queue of the transaction have been executed or a set termination condition is met, check whether all the subtasks in the post-queue of the transaction have been executed or the set termination condition is met, and feed back the subtasks with wrong execution results or with failed execution for manual handling;
and a manual inspection retry submodule 26, configured to, when the condition for re-executing a subtask is satisfied, manually place the subtask with the wrong execution result or the failed subtask into the post-queue and execute it again.
Preferably, the subtask marking module 21 is specifically configured to:
and when all the number of the transactions is lower than a preset threshold value, marking all the subtasks of the transactions in the front queue.
The embodiment of the invention has the following beneficial effects:
1. and (4) grading: in asynchronous transaction processing, one transaction is disassembled into a plurality of subtask modules, parallel relation, dependency relation and sequence relation exist among the subtask modules, when the load is high, the subtask waits for adjusting the processing priority of the subtask modules in the execution process, the subtask with high importance can be executed preferentially, and some delay processing can be performed asynchronously again; therefore, the processing period of the whole transaction is reduced, and the normal processing of the transaction queue is ensured to avoid accumulation. The method avoids the problem that in the prior art, the whole transaction processing period is long and the subsequent transactions are influenced by the fact that all subtasks are completed by one-time execution flow at present. Thereby resulting in queue accumulation and traffic anomalies. When the processing capacity is larger, the subtask module cannot be executed in a grading way, and the defect of low real-time performance is overcome.
And in the flow case (number of transactions), the subtasks can be defined as pre-processing and post-processing according to the task attribute, and the change does not need to change the code.
2. Flexibility: after a plurality of tasks are decomposed according to the complex logic of the business logic transaction defined by the configuration file, the parallel, sequential and dependency relationship can be realized according to the configuration file. If the tasks are added or deleted, the flexibility in configuration file control is realized, and no code change is needed.
3. The method is stable and reliable: if the execution of a single subtask fails in the execution process, the execution of the following subtasks is not influenced, when other subtasks have dependency relations to the subtasks, all the subtasks of the executed transaction are automatically checked after the execution is finished in the front queue, the execution result is automatically checked after the execution of each subtask in the rear queue is finished, and the execution result can be put into the queue again for retry if the execution is not finished or the execution error is not correct. An upper limit of the number of retries may be set according to the traffic characteristics, and the transaction execution is considered to have failed when the upper limit is reached. A check is also needed for data consistency and completeness after the entire transaction is completed. If a certain subtask is found to fail to execute, compensation processing is needed. The method and the device avoid the defect that the normal phenomenon that the subtask cannot judge whether to be repeatedly executed or not even when the transaction fails to be executed again because the transaction which fails to execute the subtask appears in the middle in the prior art is regarded as a failure state.
4. And when compensation is needed after the problem is manually checked, directly pushing the transaction information to a post compensation queue to execute the failed subtask again. Until the last transaction terminates. Thus, the success rate of the work is greatly improved through a self-repairing and compensating mechanism.
The above technical solutions of the embodiments of the present invention are described in detail below with reference to specific application examples, and reference may be made to the foregoing related descriptions for technical details that are not described in the implementation process.
As shown in fig. 3 and 4, the present invention provides a hierarchical asynchronous transaction processing solution. In a micro public-welfare (charity) project, a user initiates a donation, and a payment-success transaction is started after the payment succeeds. The whole transaction has to be divided into updating the project contribution information, updating the individual contribution information, updating the leaderboard information, sharing to the microblog, sending private message notifications to the project owner and the contributor respectively, recording the payment flow, updating the order information, and checking the payment information. Updating the project information and updating the personal contribution information are in a parallel relationship; updating the order information depends on updating the project information; recording the payment flow is in a sequential relationship with updating the project and personal contribution information; the processing priority of the private message notifications is lower than that of the other operations; and finally the payment information check verifies all subtasks once.
After the user completes payment on a payment platform (Weibo Wallet, WeChat, Alipay, and the like), the payment platform calls back to notify that the payment is complete, and the related transaction is then processed. That is, in the asynchronous transaction processing of the present invention, a transaction may need to be disassembled into a plurality of subtask modules with parallel, dependency and sequential relationships among them; when the processing volume is large, the subtask modules need to be executed hierarchically, so that the modules requiring high real-time performance and of high importance are executed first while the rest can be deferred and processed asynchronously later. The specific operation flow is as follows:
1. First, a transaction is opened according to a pre-queue message. The unit transmitted through the queue is a transaction, i.e. one queue message corresponds to one transaction, and each transaction comprises a plurality of tasks. When the processing volume is large, the subtask modules need to be executed hierarchically to some degree: those requiring high real-time performance and of high importance are executed first, while the rest can be deferred and processed asynchronously later. That is: the transaction is divided into a plurality of subtasks according to its internal business logic relationship; according to the internal business logic relationship and the priority processing marks, the subtasks that carry a priority processing mark and need to be processed first are marked into the front queue, and the other subtasks of the transaction are marked into the rear queue.
2. The configuration file is read according to the business name of the transaction to be executed, in order to initialize the transaction execution information (such as the information in fig. 4); the business logic relationships of the subtasks of the transaction are distinguished through the configuration file of the transaction, and the subtasks belonging to the front queue and the rear queue are set through the subtask attributes in the configuration file. The configuration file has two levels: the first level is the definition of the transaction, and the second level is the configuration of the subtasks under each transaction. Transactions or subtasks can be added or deleted and are controlled flexibly by the configuration file without changing code. The internal business logic relationships of the subtasks of the transaction are distinguished through the configuration file, and the subtasks are marked into the front queue or the rear queue through the subtask attributes in the configuration file. The subtask attributes in the configuration file include: the processing level mark of each subtask, and the processing order relationship between each subtask and the other related subtasks according to the internal logic relationship of the business. The processing level marks comprise a priority processing mark, a delay processing mark and a reprocessing mark. When the transaction is processed and a subtask is executed, the processing level mark of the subtask is either a priority processing mark or a delay processing mark: the subtasks carrying the priority processing mark are executed in the front queue, and the subtasks carrying the delay processing mark are executed in the rear queue. During subtask processing, when a subtask needs to be executed again for reasons such as an execution failure, it is marked with the reprocessing mark to indicate that it needs to be executed again.
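As an illustration only, a minimal sketch (in Python) of what such a two-level configuration might look like for the donation example. Only the attribute keys pre_mcq_name, post_mcq_name, ispost, limit_exec_num and limit_exec_time follow the parameters described in this embodiment; the transaction name, subtask names, dependency lists and concrete values are assumptions, not the patent's actual configuration.

# Hypothetical two-level configuration: level 1 defines the transaction,
# level 2 configures the subtasks under it; the tasks and values are illustrative only.
TRANSACTION_CONFIG = {
    "donate_pay_success": {                      # level 1: the transaction (business name)
        "pre_mcq_name": "donate_pre_queue",      # front (pre) queue name
        "post_mcq_name": "donate_post_queue",    # rear (post) queue name
        "limit_exec_num": 5,                     # upper limit on the number of executions
        "limit_exec_time": 600,                  # upper limit on execution time, in seconds
        "subtasks": {                            # level 2: subtask configuration
            "update_project_info":  {"ispost": False, "depends_on": []},
            "update_personal_info": {"ispost": False, "depends_on": []},
            "update_order_info":    {"ispost": False, "depends_on": ["update_project_info"]},
            "record_payment_flow":  {"ispost": False, "depends_on": ["update_project_info",
                                                                     "update_personal_info"]},
            "send_private_message": {"ispost": True,  "depends_on": []},  # lower priority: rear queue
            "check_payment_info":   {"ispost": True,  "depends_on": ["update_order_info"]},
        },
    }
}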
In the logic flow diagram of fig. 4, the global parameters include static configuration parameters and dynamic runtime parameters, wherein the static configuration parameters include:
pre_mcq_name: pre-queue name; post_mcq_name: post-queue name; buss_name: business name; process: the set of business processing methods; ispost: whether to delay processing; limit_exec_num: upper limit on the number of executions; limit_exec_time: upper limit on execution time. The dynamic runtime parameters include: param: class methods and variables; success: execution result; exec_num: number of executions; total_exec_time: total execution time; last_exec_time: time of the last execution; result: the set of execution results.
In fig. 4, the global parameters, the delay queue name and the task list are all used for the initial configuration; the task list comprises the classes (i.e. method names), the parameters and the running states.
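A minimal sketch, assuming a Python dataclass representation, of how one task-list entry could combine the static configuration parameters and the dynamic runtime parameters listed above. The field names follow those parameters; the types and default values are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class TaskEntry:
    # static configuration parameters
    buss_name: str                  # business name of the owning transaction
    process: str                    # class/method implementing this business step
    ispost: bool = False            # True means delay processing (rear queue)
    limit_exec_num: int = 5         # upper limit on the number of executions
    limit_exec_time: float = 600.0  # upper limit on total execution time, in seconds
    # dynamic runtime parameters
    param: Dict[str, Any] = field(default_factory=dict)  # class method variables / inputs
    success: bool = False           # result of the last execution
    exec_num: int = 0               # number of executions so far
    total_exec_time: float = 0.0    # total time spent executing
    last_exec_time: float = 0.0     # time of the last execution
    result: List[Any] = field(default_factory=list)      # set of execution results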
3. The following steps are executed sequentially according to the task list:
the execution order of the subtasks in the task list is specified in the configuration file, and the task list is used for describing which processing logic needs to be completed by the whole transaction.
After a transaction is taken from the queue, its subtasks are traversed one by one according to the task list information. In the front queue, whether each subtask runs is determined by the configuration items and the execution conditions in the task information of the configuration file.
Depending on the traffic (the number of transactions), and once the logical relationships have been determined, the subtasks processed in the front queue and in the rear queue can be defined through the task attributes. For example, when the number of transactions is large, some subtasks can be processed first in the front queue while the other subtasks are executed in the rear queue, which degrades (reduces) the number of tasks executed per transaction in the front queue and increases the throughput of the front queue. When the number of transactions is low, all subtasks of the transaction are marked into the front queue.
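The sketch below illustrates this front-queue pass under the assumed TaskEntry shape above; the low_traffic_threshold value and the injected execute callable are hypothetical placeholders, not parameters defined by the patent.

def process_pre_queue(transaction, task_list, pending_transactions, execute, low_traffic_threshold=100):
    # execute(transaction, task) -> bool is the caller-supplied runner for one subtask.
    run_all = pending_transactions < low_traffic_threshold   # low traffic: run everything in the front queue
    deferred = []
    for task in task_list:
        if run_all or not task.ispost:
            task.success = execute(transaction, task)        # priority-marked subtask runs now
            task.exec_num += 1
        else:
            deferred.append(task)                            # delay-marked subtask is degraded to the rear queue
    return deferred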
4. Each task passes on the parameters that other tasks depend on according to the business logic, such as the number of executions, the total time consumed, the last execution time and the state of the last execution result; the parameters obtained by an associated business step can likewise be passed to the other tasks that depend on it.
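A small sketch of this parameter passing under the assumed TaskEntry shape; the depends_on mapping and the function name are illustrative assumptions.

def propagate_params(finished_name, finished_task, tasks_by_name, depends_on):
    # depends_on maps each subtask name to the names of the subtasks it depends on
    # (taken from the configuration file in the assumed shape above).
    produced = {
        "success": finished_task.success,                  # state of the last execution result
        "exec_num": finished_task.exec_num,                # number of executions
        "total_exec_time": finished_task.total_exec_time,  # total time consumed
        "last_exec_time": finished_task.last_exec_time,    # last execution time
        "result": finished_task.result,                    # business outputs for dependent steps
    }
    for name, task in tasks_by_name.items():
        if finished_name in depends_on.get(name, ()):
            task.param[finished_name] = produced           # visible to the dependent subtask when it runs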
5. The task list is checked, and subtasks are written into the post queue depending on their execution status and whether processing is to be delayed. This specifically includes:
the number of task lists and the initialization configuration of each transaction are unchanged, and the state information (whether the execution is successful, the execution times and the execution time) after each operation is changed; the condition for the delay processing judgment is that the configuration file is configured for each task.
The subtasks of the transaction marked for priority processing are executed once in the front queue; the subtasks not marked for priority processing are degraded, and these lower-priority subtasks need to be completed in the rear queue.
In the front queue, the subtasks are traversed and executed; after execution finishes, the completion status of all the subtasks needs to be checked in order to decide whether to write some subtasks back into the rear queue for delay processing (processing again); the delay processing likewise writes the whole transaction into the rear queue. The cases of delay processing (reprocessing) include: (1) subtasks degraded in the front queue, and subtasks whose requests failed in the front queue and therefore could not be executed, which are rewritten into the rear queue and retried there; (2) subtasks that were executed in the front queue but produced a wrong execution result, for which the transaction is rewritten into the rear queue so that compensation is performed for the related task; the subtasks in the rear queue can be executed multiple times, so that retry and compensation of each subtask of the transaction are realized in the rear queue. A retry means that the subtask timed out (the request failed) and is executed again; compensation means that the execution result was wrong and the subtask needs to be executed again.
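A sketch of this completion check after the front-queue pass, under the same assumptions; the result_is_correct callable and the queue object with a put method stand in for the business-level result check and the rear-queue client, and are not defined by the patent.

def check_and_requeue(transaction, executed_tasks, deferred, post_queue, result_is_correct):
    to_reprocess = list(deferred)                  # subtasks degraded in the front queue
    for task in executed_tasks:
        if not task.success:
            to_reprocess.append(task)              # request failed: retry in the rear queue
        elif not result_is_correct(task):
            to_reprocess.append(task)              # wrong result: compensation in the rear queue
    for task in to_reprocess:
        task.ispost = True                         # mark for delay processing / reprocessing
    if to_reprocess:
        post_queue.put(transaction)                # the whole transaction is written to the rear queue
    return to_reprocess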
6. For the post queue, repeat steps 2-6, i.e.:
Step 2: reading the configuration file according to the business name of the transaction to be executed and processed, to initialize the transaction execution information (such as the information in fig. 4); distinguishing the business logic relationships of the subtasks of the transaction through the configuration file of the transaction, and setting the subtasks in the front queue and the subtasks in the rear queue through the subtask attributes in the configuration file.
Step 3: sequentially executing the subtasks of the transaction arranged in the post queue according to the task list.
Step 4: each task passes on the parameters that other tasks depend on, such as: the number of executions, the total time consumed, the last execution time and the state of the last execution result; the parameters obtained by an associated business step can likewise be passed to the other tasks that depend on it.
Step 5: the task list is checked and certain subtasks are written into the post queue again, depending on the execution status of the subtasks and whether the processing is to be delayed. This specifically includes:
(1) When the second checking submodule, in its cyclic check, finds that a subtask request in the post queue of the transaction has failed, the subtask whose request failed is written into the post queue again for a retry; a retry means that the subtask execution timed out (the request failed) and the subtask is executed again.
(2) When the second checking submodule, in its cyclic check, finds that the execution result of a subtask in the post queue of the transaction is wrong, the subtask with the wrong execution result is written into the post queue again for a compensation operation; compensation means that there is an execution result, but it is wrong, so the subtask is executed again.
For the subtasks of the post queue, steps 2-6 are repeated in a loop until the transaction is completed, i.e. until it has been checked that everything has been executed or that the termination condition is met; the subtasks executed in these repeated steps 2-6 are all executed in the delay queue.
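A sketch of one pass of this rear-queue loop under the assumptions above, using limit_exec_num and limit_exec_time as the termination conditions; the execute and report_for_manual_check callables are hypothetical placeholders.

import time

def process_post_queue(transaction, task_list, post_queue, execute, report_for_manual_check):
    unfinished = []
    for task in task_list:
        if not task.ispost:
            continue                                       # only subtasks marked for delay processing run here
        start = time.time()
        task.success = execute(transaction, task)          # retry (request failed) or compensation (wrong result)
        task.exec_num += 1
        task.total_exec_time += time.time() - start
        task.last_exec_time = time.time()
        exhausted = (task.exec_num >= task.limit_exec_num
                     or task.total_exec_time >= task.limit_exec_time)
        if task.success:
            continue
        if exhausted:
            report_for_manual_check(transaction, task)     # termination condition reached: hand over for manual check
        else:
            unfinished.append(task)
    if unfinished:
        post_queue.put(transaction)                        # write the transaction back for another pass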
7. If the execution of a subtask fails or its execution result is wrong, the reason is investigated manually. The failure may have various causes, such as data exceptions, program bugs, resource load or network problems. Once the cause has been resolved, the transaction is put through the post queue again during a resource idle period, so that the integrity of the transaction is ensured. For example, when the number of retries exceeds the upper limit and the execution of the transaction fails, the transaction is fed back for manual investigation; if the problem at that time was resource load, the upper limit on the number of executions can be modified and the transaction re-executed (reprocessed) in the post queue during an idle period.
The beneficial effects obtained by the invention are as follows:
the method mainly realizes the hierarchical processing of the transaction (one transaction comprises a plurality of subtasks) by establishing the front queue and the rear queue, and realizes the integrity of the transaction execution through the configurable task list. The functions of retry, compensation, fault tolerance, inspection and degradation of the transaction are also compatible through the post queue, one complex transaction can degrade a plurality of tasks to be flexibly configured and executed, and the success rate and the service availability are improved.
1. Hierarchical processing: in asynchronous transaction processing, one transaction is disassembled into a plurality of subtask modules with parallel, dependency and sequential relationships among them. When the load is high, the processing priority of the subtask modules is adjusted during execution, so that the subtasks of high importance are executed first and the others are deferred and processed asynchronously later; the processing period of the whole transaction is therefore shortened, and the transaction queue is kept processing normally so as to avoid accumulation. This avoids the problem in the prior art that all subtasks are completed in a single execution flow, which lengthens the processing period of the whole transaction, delays subsequent transactions, and leads to queue accumulation and business anomalies, and it overcomes the drawback that, when the processing volume is large, the subtask modules cannot be executed hierarchically and real-time performance is poor.
Moreover, depending on the traffic (the number of transactions), subtasks can be defined as pre-processed or post-processed through the task attributes, and this change requires no code modification.
2. Flexibility: after the complex business logic of a transaction defined in the configuration file has been decomposed into a plurality of tasks, their parallel, sequential and dependency relationships are realized according to the configuration file. Tasks can be added or deleted simply by editing the configuration file, with no code change needed.
3. Stability and reliability: if a single subtask fails during execution, the execution of the subsequent subtasks is not affected even when other subtasks depend on it. After the front queue finishes, all subtasks of the executed transaction are checked automatically; in the rear queue, the execution result of each subtask is checked automatically after it finishes, and a subtask that has not completed or has produced a wrong result can be put back into the queue for a retry. An upper limit on the number of retries can be set according to the business characteristics, and the transaction is considered to have failed when the limit is reached. Data consistency and completeness are also checked after the whole transaction completes, and compensation processing is performed if a subtask is found to have failed. This avoids the defect of the prior art, in which a transaction containing a failed subtask is simply regarded as failed and, when it is executed again, the subtasks cannot tell whether they are being executed repeatedly.
4. Self-repair: when compensation is needed after a problem has been investigated manually, the transaction information is pushed directly to the post compensation queue so that the failed subtask is executed again, until the transaction finally terminates. The self-repair and compensation mechanism thus greatly improves the success rate.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not intended to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, invention lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification of the claims is intended to mean a "non-exclusive or".
Those of skill in the art will further appreciate that the various illustrative logical blocks, units, and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate the interchangeability of hardware and software, various illustrative components, elements, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design requirements of the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The various illustrative logical blocks, or elements, described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be located in a user terminal. In the alternative, the processor and the storage medium may reside in different components in a user terminal.
In one or more exemplary designs, the functions described above in connection with the embodiments of the invention may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media that facilitate transfer of a computer program from one place to another. Storage media may be any available media that can be accessed by a general purpose or special purpose computer. For example, such computer-readable media can include, but is not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store program code in the form of instructions or data structures and which can be read by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Additionally, any connection is properly termed a computer-readable medium, and is thus included if the software is transmitted from a website, server, or other remote source via a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wirelessly, e.g., infrared, radio, and microwave. Disk and disc, as used herein, include compact disc, laser disc, optical disc, DVD, floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above may also be included in the computer-readable medium.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (12)

1. A method for hierarchical asynchronous processing of transactions, comprising:
dividing the transaction into a plurality of subtasks according to the internal business logic relationship of the transaction, marking the subtasks which have priority processing marks and need to be processed in advance according to the internal business logic relationship and the priority processing marks in a front queue, and marking other subtasks of the transaction in a rear queue;
traversing the subtasks of the transaction according to a task list, and executing the subtasks in the front queue of the transaction; transmitting parameters generated by executing the subtasks in the front queue to other related subtasks according to the internal business logic relationship of the transaction;
after the subtasks in the front queue of the transaction are processed, executing the subtasks in the rear queue of the transaction in combination with the parameters generated by executing the subtasks in the front queue;
and feeding back and displaying the execution results of all the subtasks of the transaction.
2. The method according to claim 1, wherein the dividing the transaction into a plurality of subtasks according to the internal business logic relationship of the transaction, marking the subtasks that have the priority processing indication and need to be processed before according to the internal business logic relationship with the priority processing indication in the front queue, and marking the other subtasks of the transaction in the back queue specifically comprises:
distinguishing the internal business logic relationships of the subtasks of the transaction through the configuration file of the transaction, and marking the subtasks into a front queue or a rear queue through the subtask attributes in the configuration file; the subtask attributes in the configuration file comprise: a processing level mark of each subtask, and the processing order relationship between each subtask and the other related subtasks according to the internal logic relationship of the business.
3. The method of claim 1, wherein after the sub-tasks in the pre-queue of transactions are processed, the method further comprises:
checking whether the subtask request in the front queue of the transaction fails or not, and adjusting and marking the subtask with failed request into the rear queue of the transaction;
and/or the presence of a gas in the gas,
checking whether the execution result of the subtasks in the front queue of the transaction is correct, and adjusting and marking the subtasks with wrong execution results into the rear queue of the transaction.
4. The method of hierarchical asynchronous processing of transactions according to claim 1, further comprising, when executing a subtask within a post queue of said transaction:
circularly checking whether the sub tasks in the post queue of the transaction fail to request, if so, executing the sub tasks again until all the sub tasks in the post queue of the transaction are executed or a set termination condition is met;
and/or the presence of a gas in the gas,
and circularly checking whether the execution result of the subtasks in the post queue of the transaction is correct or not, and executing the subtasks with wrong results again until all the subtasks in the post queue of the transaction are executed or a set termination condition is met.
5. The method for hierarchical asynchronous processing of transactions according to claim 4, further comprising, after all the subtasks in the post queue of the transaction have been executed or a set termination condition is met:
checking whether all the subtasks in the post queue of the transaction have been executed or the set termination condition is met, and feeding back the subtasks with wrong execution results or with failed execution for manual handling;
and when the condition for re-executing the subtasks is satisfied, manually putting the subtasks with wrong execution results or with failed execution into the post queue, and executing them again.
6. The method according to claim 1, wherein the dividing the transaction into a plurality of subtasks according to the internal business logic relationship of the transaction, marking the subtasks that have the priority processing indication and need to be processed before according to the internal business logic relationship with the priority processing indication in the front queue, and marking the other subtasks of the transaction in the back queue specifically comprises:
when the number of all subtasks of the transaction is lower than a preset threshold value, all subtasks of the transaction are marked in the front queue.
7. An apparatus for hierarchical asynchronous processing of transactions, comprising:
the subtask marking module is used for dividing the transaction into a plurality of subtasks according to the internal business logic relationship of the transaction, marking the subtask which has a priority processing mark and needs to be processed in advance according to the internal business logic relationship and the priority processing mark in the front queue, and marking other subtasks of the transaction in the rear queue;
the subtask priority processing module is used for traversing the subtasks of the transaction according to a task list and executing the subtasks in the front queue of the transaction; transmitting parameters generated by executing the subtasks in the front queue to other related subtasks according to the internal business logic relationship of the transaction;
the subtask delay processing module is used for executing the subtasks in the post queue of the transaction, after the subtasks in the pre-queue of the transaction are processed, in combination with the parameters generated by executing the subtasks in the pre-queue;
and the transaction result feedback module is used for feeding back and displaying the execution results of all the subtasks of the transaction.
8. The apparatus for hierarchical asynchronous processing of transactions according to claim 7, wherein said subtask marking module is specifically configured to:
distinguishing the internal business logic relationships of the subtasks of the transaction through the configuration file of the transaction, and setting the subtasks belonging to the front queue and the rear queue through the subtask attributes in the configuration file; the subtask attributes in the configuration file comprise: a processing level mark of each subtask, and the processing order relationship between each subtask and the other related subtasks according to the internal logic relationship of the business.
9. The apparatus for hierarchical asynchronous processing of transactions according to claim 7, wherein said subtask prioritization module comprises:
the first checking submodule is used for checking whether the subtask request in the front queue of the transaction fails and/or checking whether the execution result of the subtask in the front queue of the transaction is correct;
the subtask degradation submodule is used for re-marking the subtask whose request failed into the post queue of the transaction when the first checking submodule finds that a subtask request in the front queue of the transaction has failed; and/or re-marking the subtask with the wrong execution result into the post queue of the transaction when the first checking submodule finds that the execution result of a subtask in the front queue of the transaction is wrong.
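A minimal sketch of the degradation behaviour in claim 9, under the same assumption that subtasks are dictionaries with a callable "run" and an optional "check" predicate; the exception handling and names are illustrative.

def run_front_queue(front_queue, post_queue):
    params = {}
    for subtask in list(front_queue):
        try:
            result = subtask["run"]()
            if not subtask.get("check", lambda r: True)(result):
                # Wrong execution result: degrade the subtask into the post queue.
                post_queue.append(subtask)
                continue
            params[subtask["name"]] = result
        except Exception:
            # Failed request: degrade the subtask into the post queue.
            post_queue.append(subtask)
    return params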
10. The apparatus for hierarchical asynchronous processing of transactions according to claim 7, wherein said subtask delay processing module comprises:
the second checking submodule is used for cyclically checking whether any subtask request in the post queue of the transaction has failed and/or cyclically checking whether the execution results of the subtasks in the post queue of the transaction are correct;
the retry submodule is used for executing a subtask again when the second checking submodule finds, in the cyclic check, that the request of that subtask in the post queue of the transaction has failed, until all the subtasks in the post queue of the transaction are executed or a set termination condition is met;
and the compensation submodule is used for executing a subtask with a wrong execution result again when the second checking submodule finds, in the cyclic check, that the execution result of that subtask in the post queue of the transaction is wrong, until all the subtasks in the post queue of the transaction are executed or a set termination condition is met.
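The retry and compensation loop of claim 10 might look like the following sketch, where the set termination condition is assumed to be a maximum number of rounds; the constant and all names are hypothetical.

MAX_ROUNDS = 5  # assumed termination condition

def run_post_queue(post_queue, params, max_rounds=MAX_ROUNDS):
    pending = list(post_queue)
    for _ in range(max_rounds):
        if not pending:
            break
        still_pending = []
        for subtask in pending:
            try:
                result = subtask["run"](params)
                if not subtask.get("check", lambda r: True)(result):
                    # Compensation: the execution result is wrong, execute again.
                    still_pending.append(subtask)
            except Exception:
                # Retry: the request failed, execute again.
                still_pending.append(subtask)
        pending = still_pending
    # Whatever is still pending is handed over to manual inspection (claim 11).
    return pending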
11. The apparatus for hierarchical asynchronous processing of transactions according to claim 10, wherein said subtask delay processing module further comprises:
the third checking submodule is used for checking, after all the subtasks in the post queue of the transaction are executed or the set termination condition is met, whether all the subtasks in the post queue of the transaction have been executed or whether the set termination condition is met, and feeding back the subtasks with wrong execution results or with failed requests for manual inspection;
and the manual inspection retry submodule is used for, when a subtask is to be executed again, manually putting the subtask with the wrong execution result or with the failed request back into the post queue and executing it again.
12. The apparatus for hierarchical asynchronous processing of transactions according to claim 7, wherein said subtask marking module is specifically configured to:
and when the total number of subtasks of the transaction is lower than a preset threshold, marking all the subtasks of the transaction in the front queue.
CN202010249946.7A 2020-04-01 2020-04-01 Method and device for processing transactions in hierarchical and asynchronous mode Active CN111580939B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010249946.7A CN111580939B (en) 2020-04-01 2020-04-01 Method and device for processing transactions in hierarchical and asynchronous mode

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010249946.7A CN111580939B (en) 2020-04-01 2020-04-01 Method and device for processing transactions in hierarchical and asynchronous mode

Publications (2)

Publication Number Publication Date
CN111580939A true CN111580939A (en) 2020-08-25
CN111580939B CN111580939B (en) 2023-09-01

Family

ID=72126100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010249946.7A Active CN111580939B (en) 2020-04-01 2020-04-01 Method and device for processing transactions in hierarchical and asynchronous mode

Country Status (1)

Country Link
CN (1) CN111580939B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6058389A (en) * 1997-10-31 2000-05-02 Oracle Corporation Apparatus and method for message queuing in a database system
CN101216783A (en) * 2007-12-29 2008-07-09 中国建设银行股份有限公司 Process for optimizing ordering processing for multiple affairs
CN101882161A (en) * 2010-06-23 2010-11-10 中国工商银行股份有限公司 Application level asynchronous task scheduling system and method
CN102981904A (en) * 2011-09-02 2013-03-20 阿里巴巴集团控股有限公司 Task scheduling method and system
CN102508716A (en) * 2011-09-29 2012-06-20 用友软件股份有限公司 Task control device and task control method
CN104158699A (en) * 2014-08-08 2014-11-19 广州新科佳都科技有限公司 Data acquisition method based on priority and segmentation
CN105068864A (en) * 2015-07-24 2015-11-18 北京京东尚科信息技术有限公司 Method and system for processing asynchronous message queue
WO2018015965A1 (en) * 2016-07-19 2018-01-25 Minacs Private Limited System and method for efficiently processing transactions by automating resource allocation
US20180321945A1 (en) * 2017-03-24 2018-11-08 Western Digital Technologies, Inc. System and method for processing and arbitrating submission and completion queues
CN109558237A (en) * 2017-09-27 2019-04-02 北京国双科技有限公司 A kind of task status management method and device
CN109660612A (en) * 2018-12-11 2019-04-19 北京潘达互娱科技有限公司 A kind of request processing method and server
CN109885382A (en) * 2019-01-16 2019-06-14 深圳壹账通智能科技有限公司 The system of cross-system distributed transaction processing method and distributing real time system
CN109933611A (en) * 2019-02-22 2019-06-25 深圳达普信科技有限公司 A kind of adaptive collecting method and system
CN110046041A (en) * 2019-04-15 2019-07-23 北京中安智达科技有限公司 A kind of collecting method based on celery Scheduling Framework
CN110221927A (en) * 2019-06-03 2019-09-10 中国工商银行股份有限公司 Asynchronous message processing method and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112817720A (en) * 2021-01-30 2021-05-18 北京奇保信安科技有限公司 Visual workflow scheduling method and device and electronic equipment
CN112988428A (en) * 2021-04-26 2021-06-18 南京蜂泰互联网科技有限公司 Distributed message asynchronous notification middleware implementation method and system
CN113592228A (en) * 2021-06-29 2021-11-02 中国红十字基金会 Red balloon race management system
CN114529301A (en) * 2022-02-21 2022-05-24 山东浪潮通软信息科技有限公司 Voucher posting method, device, equipment and medium
CN115601195A (en) * 2022-10-17 2023-01-13 桂林电子科技大学(Cn) Transaction bidirectional recommendation system and method based on real-time label of power user
CN115601195B (en) * 2022-10-17 2023-09-08 桂林电子科技大学 Transaction bidirectional recommendation system and method based on real-time label of power user

Also Published As

Publication number Publication date
CN111580939B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
CN111580939A (en) Method and device for hierarchical asynchronous transaction processing
US20190332502A1 (en) Method, device and computer program product for managing storage system
US8938421B2 (en) Method and a system for synchronizing data
US9535754B1 (en) Dynamic provisioning of computing resources
US20080313502A1 (en) Systems, methods and computer products for trace capability per work unit
US20110179398A1 (en) Systems and methods for per-action compiling in contact handling systems
US20110178946A1 (en) Systems and methods for redundancy using snapshots and check pointing in contact handling systems
CN106557470B (en) Data extraction method and device
US20110179304A1 (en) Systems and methods for multi-tenancy in contact handling systems
CN109634989B (en) HIVE task execution engine selection method and system
CN110647460B (en) Test resource management method and device and test client
CN111400011A (en) Real-time task scheduling method, system, equipment and readable storage medium
US11086696B2 (en) Parallel cloned workflow execution
CN113448826A (en) Software automation test system and method
CN116680055A (en) Asynchronous task processing method and device, computer equipment and storage medium
US20080294670A1 (en) Method and system for hierarchical logging
CN111913804A (en) Pre-visit report generation method and device, electronic equipment and storage medium
CN110764835A (en) File configuration method and device of application environment, computer equipment and storage medium
CN113342512B (en) IO task silencing and driving method and device and related equipment
CN110716972A (en) Method and device for processing error of high-frequency calling external interface
US11112970B2 (en) Software system logging based on runtime analysis
CN116149707B (en) Method and device for detecting and avoiding upgrading risk of distributed system
US20240126660A1 (en) File-based asynchronous and failsafe execution in cloud
CN114610735A (en) Data consistency checking method, device, equipment and storage medium
CN115827050A (en) Data calling method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant