CN115237576A - Serial-parallel hybrid processing method, device, equipment and medium - Google Patents


Publication number
CN115237576A
Authority
CN
China
Prior art keywords
serial
processing
parallel
processed
subtask
Prior art date
Legal status
Pending
Application number
CN202210957421.8A
Other languages
Chinese (zh)
Inventor
卞嘉骏
唐成山
Current Assignee
China Construction Bank Corp
CCB Finetech Co Ltd
Original Assignee
China Construction Bank Corp
CCB Finetech Co Ltd
Priority date
Filing date
Publication date
Application filed by China Construction Bank Corp, CCB Finetech Co Ltd filed Critical China Construction Bank Corp
Priority to CN202210957421.8A
Publication of CN115237576A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00: Indexing scheme relating to G06F9/00
    • G06F 2209/50: Indexing scheme relating to G06F9/50
    • G06F 2209/5017: Task decomposition

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

The application relates to the field of task scheduling, and in particular to a serial-parallel hybrid processing method, device, equipment and medium, which solve the problem that the sub-tasks under one main task can only be processed either entirely in serial or entirely in parallel. The method divides a main task to be processed into a plurality of subtasks according to the service scene type, determines a target processing strategy corresponding to each subtask, places each subtask governed by a parallel processing strategy into different parallel fragments and each subtask governed by a serial processing strategy into different serial fragments, and then processes all parallel fragments and all serial fragments concurrently: the subtasks within each parallel fragment are processed in parallel, while the subtasks within the same serial fragment are processed in series. Serial-parallel hybrid processing of the subtasks under the same main task is thereby achieved, saving processing time and improving processing efficiency.

Description

Serial-parallel hybrid processing method, device, equipment and medium
Technical Field
The present application relates to the field of task scheduling, and in particular, to a serial-parallel hybrid processing method, apparatus, device, and medium.
Background
A computer processes tasks either in parallel or sequentially. Parallel processing is generally managed by a thread pool: all tasks to be executed are thrown into the thread pool together to run concurrently, so that the tasks are processed quickly. Sequential processing is realized through thread management: only one processing thread is opened, and the next task can begin only after the current task has completed.
At present, the sub-tasks under one main task can only be processed in one of two modes: all serial or all parallel.
Disclosure of Invention
Embodiments of the present application provide a serial-parallel hybrid processing method, apparatus, device, and medium, which are used to solve the problem that the sub-tasks under one main task can only be processed entirely in serial or entirely in parallel.
In a first aspect, a serial-parallel hybrid processing method includes:
dividing a main task to be processed into a plurality of subtasks based on the service scene type, wherein the type of the main task to be processed is a batch receipt-and-payment type;
determining a target processing strategy corresponding to each subtask based on the corresponding relation between the preset service scene type and the processing strategy;
respectively putting each subtask of the target processing strategy belonging to the parallel processing strategy into different parallel fragments;
respectively putting each subtask of the target processing strategy belonging to the serial processing strategy into different serial fragments;
and performing parallel processing on each parallel fragment and each serial fragment, performing parallel processing on each subtask in each parallel fragment, and performing serial processing on each subtask in the same serial fragment.
In the embodiment of the application, a main task to be processed is divided into a plurality of subtasks according to the service scene type, a target processing strategy corresponding to each subtask is further determined, on the basis, each subtask belonging to a parallel processing strategy is respectively placed into different parallel fragments, each subtask belonging to a serial processing strategy is respectively placed into different serial fragments, so that all parallel fragments and all serial fragments are processed in parallel, each subtask in each parallel fragment is processed in parallel, each subtask in the same serial fragment is processed in series, serial-parallel mixed processing of each subtask under the same main task is realized, processing time of batch subtasks is saved, and processing efficiency of batch subtasks is improved.
In one possible embodiment, whether the type of the pending main task is the batch receipt-and-payment type is determined as follows:
extracting corresponding account information from each item to be processed included in the main task to be processed, wherein the total number of items to be processed exceeds a first preset threshold;
and if the payment types corresponding to the pieces of account information are not consistent, determining that the type of the main task to be processed is the batch receipt-and-payment type, wherein the payment types comprise payment and collection.
In a possible embodiment, dividing the main task to be processed into a plurality of subtasks based on the service scenario type includes:
respectively determining the service scene type corresponding to each item to be processed based on the account information and the payment type corresponding to each item to be processed in the main task to be processed;
and dividing the to-be-processed items belonging to the same service scene type into the same subtasks, wherein the service scene types are various.
In a possible embodiment, putting the sub-tasks whose target processing strategies belong to the parallel processing strategies into different parallel fragments respectively includes:
summarizing the subtasks of which the target processing strategy belongs to the parallel processing strategy based on the execution sequence of each subtask to obtain the total number of the subtasks needing parallel processing;
and respectively putting the aggregated subtasks into different parallel fragments based on the capacity and the total number of the parallel fragments.
In a possible embodiment, putting the subtasks whose target processing strategies belong to the serial processing strategy into different serial fragments respectively includes:
summarizing each subtask of the target processing strategy belonging to the serial processing strategy according to different account information to obtain a plurality of summarized serial groups;
and respectively determining the serial fragments corresponding to the serial groups, and respectively putting the subtasks in different serial groups into the corresponding serial fragments.
In a possible embodiment, putting the subtasks in different serial groups into the corresponding serial fragments respectively includes:
if the total number of subtasks requiring serial processing in a target serial group exceeds the capacity of the corresponding serial fragment, combining at least two serial fragments whose total number of stored subtasks is zero into one summary serial fragment, and putting the subtasks in the target serial group into the summary serial fragment.
In a second aspect, the present application provides a serial-parallel hybrid processing apparatus, the apparatus comprising:
the dividing module is used for dividing the main task to be processed into a plurality of subtasks based on the service scene type, wherein the type of the main task to be processed is a batch receipt-and-payment type;
the determining module is used for determining a target processing strategy corresponding to each subtask based on the corresponding relation between the preset service scene type and the processing strategy;
the first putting-in module is used for respectively putting each subtask of the target processing strategy belonging to the parallel processing strategy into different parallel fragments;
the second putting module is used for respectively putting each subtask of the target processing strategy belonging to the serial processing strategy into different serial fragments;
and the processing module is used for carrying out parallel processing on each parallel fragment and each serial fragment, carrying out parallel processing on each subtask in each parallel fragment, and carrying out serial processing on each subtask in the same serial fragment.
In one possible embodiment, whether the type of the pending main task is the batch receipt-and-payment type is determined as follows:
extracting corresponding account information from each item to be processed included in the main task to be processed, wherein the total number of items to be processed exceeds a first preset threshold;
and if the payment types corresponding to the pieces of account information are not consistent, determining that the type of the main task to be processed is the batch receipt-and-payment type, wherein the payment types comprise payment and collection.
In a possible embodiment, the to-be-processed main task is divided into a plurality of sub-tasks based on the service scenario type, and the dividing module is configured to:
respectively determining the service scene type corresponding to each item to be processed based on the account information and the payment type corresponding to each item to be processed in the main task to be processed;
and dividing the items to be processed belonging to the same service scene type into the same subtasks, wherein the service scene types are various.
In a possible embodiment, the sub-tasks whose target processing strategies belong to the parallel processing strategies are respectively placed into different parallel fragments, and the first placing module is configured to:
summarizing each subtask of which the target processing strategy belongs to the parallel processing strategy based on the execution sequence of each subtask to obtain the total number of the subtasks needing parallel processing;
and respectively putting each aggregated subtask into different parallel fragments based on the capacity and the total number of the parallel fragments.
In a possible embodiment, the subtasks whose target processing strategies belong to the serial processing strategy are respectively put into different serial fragments, and the second putting module is configured to:
summarizing each subtask of the target processing strategy belonging to the serial processing strategy according to different account information to obtain a plurality of summarized serial groups;
and respectively determining the serial fragments corresponding to the serial groups, and respectively putting the subtasks in different serial groups into the corresponding serial fragments.
In a possible embodiment, the subtasks in different serial groups are respectively put into the corresponding serial fragments, and the second putting module is configured to:
if the total number of subtasks requiring serial processing in a target serial group exceeds the capacity of the corresponding serial fragment, combine at least two serial fragments whose total number of stored subtasks is zero into one summary serial fragment, and put the subtasks in the target serial group into the summary serial fragment.
In a third aspect, the present application provides an electronic device, comprising:
a memory for storing program instructions;
a processor for calling the program instructions stored in the memory and executing the steps comprised in the method of any one of the first aspect according to the obtained program instructions.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program comprising program instructions which, when executed by a computer, cause the computer to perform the method of any of the first aspects.
In a fifth aspect, the present application provides a computer program product comprising: computer program code which, when run on a computer, causes the computer to perform the method of any of the first aspects.
Drawings
Fig. 1 is a schematic diagram of a serial-parallel hybrid process according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a serial-parallel hybrid processing method according to an embodiment of the present disclosure;
fig. 3 is a flowchart for dividing a main task to be processed into a plurality of subtasks according to an embodiment of the present application;
fig. 4 is a flowchart of putting each sub-task belonging to the parallel processing policy into parallel fragments according to an embodiment of the present application;
fig. 5 is a flowchart of putting each sub-task belonging to the serial processing policy into serial fragments according to an embodiment of the present application;
fig. 6 is a structural diagram of a serial-parallel hybrid processing apparatus according to an embodiment of the present application;
fig. 7 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. In the present application, the embodiments and features of the embodiments may be arbitrarily combined with each other without conflict. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
The terms "first" and "second" in the description and claims of the present application and the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the term "comprises" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. "Plurality" in the present application may mean at least two, for example, two, three or more; the embodiments of the present application are not limited in this respect.
In the technical scheme, the data acquisition, transmission, use and the like meet the requirements of relevant national laws and regulations.
Before describing a serial-parallel hybrid processing method provided by the embodiment of the present application, for convenience of understanding, the following detailed description is first made on the technical background of the embodiment of the present application.
In the process of processing tasks by a computer, the tasks are divided into two processing modes of parallel processing and sequential processing, generally, the parallel processing is managed by a thread pool, namely all the tasks to be executed are completely thrown into the thread pool to realize concurrency, so that the tasks can be quickly processed; the sequential processing is realized through thread management, namely only one thread processing task is opened, and the next task can be performed only when one task is completed. However, when the number of services is large, that is, when there are many tasks to be processed, the above-mentioned methods that can only process serially or only process in parallel have poor processing efficiency, and cannot efficiently and quickly complete the processing of all tasks.
In order to solve the above problem of poor efficiency in processing batch tasks, preferred embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, in the embodiment of the present disclosure, the system includes at least one computer. In fig. 1, there may be a plurality of main tasks to be processed by the computer; for a main task of the batch receipt-and-payment type, the computer puts the plurality of subtasks included in the main task into different parallel fragments and serial fragments respectively, so that the parallel fragments and the serial fragments can be processed simultaneously, as described in detail below.
Referring to fig. 2, in the embodiment of the present disclosure, a specific flow of the serial-parallel hybrid processing method is as follows:
step 201: and dividing the main task to be processed into a plurality of subtasks based on the service scene type, wherein the type of the main task to be processed is a batch type to be paid.
It should be noted that the types of to-be-processed main tasks are various. If the number of to-be-processed items included in the to-be-processed main task is small, the items can be processed directly in a serial or parallel manner; if the number of to-be-processed items is large, processing them in a purely serial or purely parallel manner is inefficient. Therefore, the type of the to-be-processed main task needs to be judged first. Specifically, whether the type of the main task to be processed is the batch receipt-and-payment type is determined as follows:
(1) And extracting corresponding account information from each item to be processed included in the main task to be processed, wherein the total amount of each item to be processed exceeds a first preset threshold value.
In the implementation process, it is first required to determine whether the total number of the to-be-processed items included in the to-be-processed main task exceeds a first preset threshold, where a specific value of the first preset threshold may be flexibly set according to an actual usage scenario.
When it is determined that the total number of the items to be processed exceeds the first preset threshold, that is, when the main task to be processed is determined to be a batch-type processing task, the corresponding account information is further extracted from the items to be processed, for example, the format of the account information may be predefined, and the account information corresponding to the format may be extracted from each item to be processed during the execution.
(2) If the payment types corresponding to the pieces of account information are not consistent, determine that the type of the main task to be processed is the batch receipt-and-payment type, wherein the payment types comprise payment and collection.
After determining the account information corresponding to each to-be-processed item, the payment type corresponding to that account information is further determined, that is, whether the item pays out from or collects into the account. In the implementation process, if the payment types corresponding to the same (or different) pieces of account information include both payment and collection, the type of the main task to be processed is determined to be the batch receipt-and-payment type.
For example, the payment type corresponding to the account information of a user Li includes paying out funds, and the main task also includes collecting a fee into the account of a user Wang; since both payment and collection appear, the type of the main task to be processed is determined to be the batch receipt-and-payment type.
For another example, if the payment type corresponding to the account information of Li includes only payouts while the payment type corresponding to the account information of Wang includes only collections, the payment types are likewise inconsistent across accounts, and the type of the to-be-processed main task is again determined to be the batch receipt-and-payment type.
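The type check just described can be sketched in Python. This is an illustrative assumption-laden sketch, not the patent's implementation: `PendingItem`, the threshold value, and the `"pay"`/`"collect"` labels are invented names for the account information, first preset threshold, and payment/collection types described above.

```python
from dataclasses import dataclass

FIRST_THRESHOLD = 2  # assumed value for the "first preset threshold" on item count

@dataclass
class PendingItem:
    account: str       # extracted account information (illustrative)
    payment_type: str  # "pay" (payout) or "collect" (fee collection)

def is_batch_receipt_and_payment(items):
    """True when the main task is a batch whose payment types are inconsistent."""
    if len(items) <= FIRST_THRESHOLD:   # too few items: not a batch-type task
        return False
    types = {item.payment_type for item in items}
    return len(types) > 1               # both payment and collection appear

items = [PendingItem("Li", "pay"), PendingItem("Li", "collect"),
         PendingItem("Wang", "pay")]
print(is_batch_receipt_and_payment(items))  # True
```

Here the main task qualifies because its items mix both payment directions, matching the Li/Wang examples.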
After determining that the type of the to-be-processed main task is the batch receipt-and-payment type, the to-be-processed main task is divided into a plurality of subtasks based on the service scene type. As shown in fig. 3, this includes:
step 2011: and respectively determining the service scene type corresponding to each item to be processed based on the account information and the payment type corresponding to each item to be processed in the main task to be processed.
Considering that the number of to-be-processed items included in the to-be-processed main task is large, and the execution processes of some of these items are similar, such items need to be summarized. In the specific implementation process, the service scene type corresponding to each to-be-processed item is determined according to the account information and payment type corresponding to that item; there are multiple service scene types. It should be noted that the service scene types are usually preset, for example as type A, type B, type C, and type D.
For example, the account information corresponding to the to-be-processed item a is determined to be a1 and the payment type to be a2, so the service scene type corresponding to item a is a1+a2; assuming a1+a2 is equivalent to type A, the service scene type corresponding to item a is set to A.
For another example, the account information corresponding to the to-be-processed item m is determined to be m1 and the payment type to be m2, so the service scene type corresponding to item m is m1+m2; assuming m1+m2 is equivalent to type A, the service scene type corresponding to item m is set to A.
For another example, the account information corresponding to the to-be-processed item b is determined to be b1 and the payment type to be b2, so the service scene type corresponding to item b is b1+b2; assuming b1+b2 is equivalent to type B, the service scene type corresponding to item b is set to B.
Step 2012: and dividing the to-be-processed items belonging to the same service scene type into the same subtasks.
In the implementation process, after the service scene type corresponding to each item to be processed is determined, the items to be processed belonging to the same service scene type are divided into the same sub-task, so that each item to be processed in the main task to be processed is divided into a plurality of sub-tasks according to the service scene type.
Still referring to the examples in step 2011, the to-be-processed items a and m are divided into subtask A, and the to-be-processed item b is divided into subtask B.
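Steps 2011 and 2012 together amount to grouping pending items by a preset scene-type mapping. A minimal sketch under assumed names (the `SCENE_TYPE` table encodes the a1+a2 → A style correspondence from the examples above; it is illustrative, not from the disclosure):

```python
from collections import defaultdict

# Assumed preset mapping: (account info, payment type) -> service scene type
SCENE_TYPE = {("a1", "a2"): "A", ("m1", "m2"): "A", ("b1", "b2"): "B"}

def divide_into_subtasks(items):
    """Group to-be-processed items of the same service scene type into one subtask."""
    subtasks = defaultdict(list)
    for name, account, ptype in items:
        subtasks[SCENE_TYPE[(account, ptype)]].append(name)
    return dict(subtasks)

items = [("a", "a1", "a2"), ("m", "m1", "m2"), ("b", "b1", "b2")]
print(divide_into_subtasks(items))  # {'A': ['a', 'm'], 'B': ['b']}
```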
Step 202: and determining a target processing strategy corresponding to each subtask based on the corresponding relation between the preset service scene type and the processing strategy.
In the implementation process, after the divided multiple subtasks are obtained, whether the subtask belongs to a subtask requiring serial processing or a subtask requiring parallel processing is further determined. Specifically, according to the service scene type corresponding to the subtask, a corresponding processing policy is searched in a corresponding relationship between a preset service scene type and the processing policy, and the searched processing policy is determined as a target processing policy corresponding to the subtask.
It should be added that the processing strategies include a serial processing strategy and a parallel processing strategy, and a processing strategy corresponding to a certain service scene type is either the serial processing strategy or the parallel processing strategy.
After determining the target processing strategy corresponding to each subtask, respectively putting each subtask into different fragments for different processing according to whether the subtask is a serial processing strategy or a parallel processing strategy.
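Determining each subtask's target strategy from the preset correspondence is a plain table lookup; the sketch below uses assumed strategy assignments for illustration only (the real correspondence is configured per deployment):

```python
# Assumed preset correspondence between scene types and processing strategies;
# each scene type maps to exactly one strategy, as noted above.
STRATEGY = {"A": "parallel", "B": "serial", "C": "parallel", "D": "serial"}

def target_strategies(subtasks):
    """Return the target processing strategy for each subtask's scene type."""
    return {scene: STRATEGY[scene] for scene in subtasks}

print(target_strategies({"A": ["a", "m"], "B": ["b"]}))
# {'A': 'parallel', 'B': 'serial'}
```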
Step 203: and respectively putting each subtask of the target processing strategy belonging to the parallel processing strategy into different parallel fragments.
A plurality of fragments are maintained in the database corresponding to the subtasks. To improve the processing efficiency of the subtasks, the fragments are divided into parallel fragments and serial fragments: parallel fragments store the subtasks belonging to the parallel processing strategy, and serial fragments store the subtasks belonging to the serial processing strategy.
The following steps of putting different parallel pieces into the subtasks are introduced first, and as shown in fig. 4, the steps include:
step 2031: and summarizing the subtasks of which the target processing strategy belongs to the parallel processing strategy based on the execution sequence of each subtask to obtain the total number of the subtasks needing parallel processing.
Considering that different subtasks belonging to the parallel processing strategy may be related to one another, the execution order of the subtasks is determined, and the subtasks whose target processing strategy is the parallel processing strategy are summarized according to that order, for example into a linked list.
Further, the total number of subtasks requiring parallel processing is obtained, for example by taking the length of the linked list. That is, after summarizing, not only is a sequence of subtasks arranged in execution order obtained, but the total number of subtasks is also determined.
Step 2032: and respectively putting the aggregated subtasks into different parallel fragments based on the capacity and the total number of the parallel fragments.
Generally, the capacity of the fragment is determined by the type of the computer, and therefore, in the implementation process, the capacity of the parallel fragments needs to be determined, so that the number of the parallel fragments needed to be used can be determined through the capacity and the total number, and then the aggregated subtasks are respectively placed into different parallel fragments, that is, the aggregated subtasks are segmented by the capacity of the parallel fragments, so that each aggregated subtask can be placed into and filled with one parallel fragment.
For example, the total number of the aggregated subtasks is 1000, and the capacity of the parallel shards is 100, so that the aggregated subtasks are divided into 10 segments, and each segment is put into one parallel shard, thereby obtaining 10 parallel shards.
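The 1000-subtask, capacity-100 example above is ordinary fixed-size chunking. A brief sketch (the function name is illustrative):

```python
def fill_parallel_fragments(subtasks, capacity):
    """Split the aggregated subtasks into parallel fragments of the given capacity."""
    return [subtasks[i:i + capacity] for i in range(0, len(subtasks), capacity)]

fragments = fill_parallel_fragments(list(range(1000)), capacity=100)
print(len(fragments))     # 10 parallel fragments
print(len(fragments[0]))  # each holds 100 subtasks
```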
Step 204: and respectively putting each subtask of the target processing strategy belonging to the serial processing strategy into different serial fragments.
The following describes the steps of putting the subtasks into different serial fragments. With reference to fig. 5, the method includes:
step 2041: and summarizing the subtasks of the target processing strategy belonging to the serial processing strategy according to different account information to obtain a plurality of summarized serial groups.
In consideration of the fact that all subtasks belonging to the serial processing strategy need to be executed in sequence, specifically, all subtasks belonging to the same account information need to be executed in sequence according to the sequence. Therefore, in the implementation process, the subtasks of the target processing strategy belonging to the serial processing strategy need to be summarized respectively according to the difference of the account information, so that a plurality of serial groups after being summarized are obtained. The method is characterized in that the subtasks belonging to the serial processing strategy are divided into different serial groups, so that the subtasks in each serial group are executed in sequence, and different serial groups can be executed in parallel.
Step 2042: and respectively determining the serial fragments corresponding to the serial groups, and respectively putting the subtasks in different serial groups into the corresponding serial fragments.
In the implementation process, the serial fragments corresponding to the serial groups are determined respectively, and the determination principle can be flexibly set according to the use scene. And further, the subtasks in different serial groups are respectively put into the corresponding serial fragments, namely, a plurality of subtasks in the serial group corresponding to the same account information are put into one serial fragment.
In addition, since the capacity of a serial fragment is also limited, if it is detected that the total number of subtasks requiring serial processing in a serial group (a target serial group) exceeds the capacity of its corresponding serial fragment, at least two serial fragments whose total number of stored subtasks is zero are merged into one summary serial fragment, and the subtasks in the target serial group are put into the summary serial fragment.
Assuming that the total number of the subtasks requiring serial processing in the target serial packet 1 is 200, and the capacity of the serial fragments is 100, in this case, at least two serial fragments that have not stored a subtask are merged to obtain one summarized serial fragment, and the new capacity of the summarized serial fragment is 200, then all the subtasks requiring serial processing in the target serial packet 1 are placed in the summarized serial fragment.
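The capacity handling in the example above can be sketched as follows, under the assumptions that fragments are modeled as plain lists and that a group small enough for one fragment is simply placed into the first empty fragment (the source leaves the assignment rule flexible).

```python
import math

def place_serial_group(group, fragments, capacity=100):
    """Place one serial group into the fragment list.

    If the group fits within a single fragment's capacity it goes into the
    first empty fragment. Otherwise enough empty fragments are merged into
    one summary fragment (their slots collapse into a single list) so that
    the whole group can be stored together, preserving its order."""
    if len(group) <= capacity:
        for frag in fragments:
            if not frag:                      # first empty fragment wins
                frag.extend(group)
                return fragments
        raise RuntimeError("no free serial fragment available")
    # group exceeds one fragment: merge ceil(n / capacity) empty fragments
    needed = math.ceil(len(group) / capacity)
    empties = [i for i, f in enumerate(fragments) if not f]
    if len(empties) < needed:
        raise RuntimeError("not enough empty fragments to merge")
    # keep the first empty slot as the summary fragment, drop the others
    for i in reversed(empties[1:needed]):
        del fragments[i]
    fragments[empties[0]].extend(group)
    return fragments
```

With a 200-subtask group and capacity 100, two empty fragments collapse into one summary fragment that holds all 200 subtasks, matching the worked example in the text.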
Step 205: process the parallel fragments and serial fragments in parallel with one another, process the subtasks within each parallel fragment in parallel, and process the subtasks within the same serial fragment serially.
In the implementation process, after the subtasks have been placed into the parallel fragments and serial fragments, the fragments are processed in parallel with one another. For example, a plurality of processes is created, each corresponding to one parallel fragment or one serial fragment, and the processes are started to process their fragments. Within each parallel fragment the subtasks are processed in parallel, while the subtasks within the same serial fragment are processed serially. For example, a plurality of threads is created, each corresponding to one subtask of a parallel fragment or to the first subtask of a serial fragment, and the threads are started to process the subtasks.
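The execution model of step 205 can be sketched with thread pools (the source mentions processes and threads; threads are used here purely for a compact, runnable illustration, and subtasks are modeled as callables).

```python
from concurrent.futures import ThreadPoolExecutor

def run_serial_fragment(fragment):
    # subtasks within one serial fragment must execute strictly in order
    return [subtask() for subtask in fragment]

def run_parallel_fragment(fragment):
    # subtasks within a parallel fragment may all run concurrently
    with ThreadPoolExecutor(max_workers=len(fragment) or 1) as pool:
        return list(pool.map(lambda subtask: subtask(), fragment))

def run_all_fragments(parallel_fragments, serial_fragments):
    # the fragments themselves are processed in parallel with one another
    jobs = [(run_parallel_fragment, f) for f in parallel_fragments]
    jobs += [(run_serial_fragment, f) for f in serial_fragments]
    with ThreadPoolExecutor(max_workers=len(jobs) or 1) as pool:
        futures = [pool.submit(fn, frag) for fn, frag in jobs]
        return [fut.result() for fut in futures]
```

Each fragment gets its own worker, so serial fragments do not block parallel ones; inside a serial fragment the plain loop enforces ordering, while `pool.map` fans the parallel fragment's subtasks out to multiple threads.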
Based on the same inventive concept, an embodiment of the present application provides a serial-parallel hybrid processing apparatus. As shown in fig. 6, the apparatus includes:
a dividing module 601, configured to divide a main task to be processed into a plurality of subtasks based on service scenario type, where the type of the main task to be processed is a batch receipt-and-payment type;
a determining module 602, configured to determine a target processing strategy corresponding to each subtask based on a preset correspondence between service scenario types and processing strategies;
a first placing module 603, configured to place the subtasks whose target processing strategy is the parallel processing strategy into different parallel fragments respectively;
a second placing module 604, configured to place the subtasks whose target processing strategy is the serial processing strategy into different serial fragments respectively;
a processing module 605, configured to process the parallel fragments and serial fragments in parallel with one another, process the subtasks within each parallel fragment in parallel, and process the subtasks within the same serial fragment serially.
In a possible embodiment, whether the type of the main task to be processed is the batch receipt-and-payment type is determined as follows:
extracting the corresponding account information from each item to be processed included in the main task to be processed, where the total amount of the items to be processed exceeds a first preset threshold;
and if the payment types corresponding to the account information are not consistent, determining that the type of the main task to be processed is the batch receipt-and-payment type, where the payment types comprise payment and collection.
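The type check above can be sketched as follows. The field names, the payment-type labels, and the default threshold value are all assumptions made for the example; the source specifies only the two conditions, not their representation.

```python
def is_batch_receipt_and_payment(items, first_threshold=10000):
    """Return True when the pending items qualify as the batch
    receipt-and-payment type: the total amount exceeds the first preset
    threshold AND the payment types of the items are not all consistent
    (i.e. the task mixes payment and collection)."""
    total = sum(item["amount"] for item in items)
    if total <= first_threshold:
        return False
    payment_types = {item["payment_type"] for item in items}
    return len(payment_types) > 1   # inconsistent payment types
```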
In a possible embodiment, when dividing the main task to be processed into a plurality of subtasks based on service scenario type, the dividing module 601 is configured to:
determine the service scenario type corresponding to each item to be processed based on the account information and payment type corresponding to that item in the main task to be processed;
and divide the items to be processed belonging to the same service scenario type into the same subtask, where there are a plurality of service scenario types.
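The division step can be sketched as follows. The source does not fix how a service scenario type is derived from account information and payment type, so this sketch simply uses the pair itself as the type, which is an assumption made for illustration.

```python
from collections import defaultdict

def divide_into_subtasks(items):
    """Group pending items into subtasks by service scenario type, where
    the scenario type is assumed to be determined by the item's
    (account information, payment type) pair."""
    subtasks = defaultdict(list)
    for item in items:
        scenario_type = (item["account"], item["payment_type"])
        subtasks[scenario_type].append(item)
    return list(subtasks.values())
```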
In a possible embodiment, when placing the subtasks whose target processing strategy is the parallel processing strategy into different parallel fragments, the first placing module 603 is configured to:
summarize the subtasks whose target processing strategy is the parallel processing strategy based on the execution order of the subtasks, obtaining the total number of subtasks requiring parallel processing;
and place the summarized subtasks into different parallel fragments respectively based on the capacity of the parallel fragments and the total number.
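Placement by capacity and total number can be sketched as a simple chunking of the summarized subtask list; the fill-first-fragment-then-next policy is an assumption, since the source only states that placement depends on the fragment capacity and the total.

```python
def place_parallel_subtasks(subtasks, fragment_capacity):
    """Distribute the summarized parallel-strategy subtasks across parallel
    fragments of a fixed capacity, filling each fragment before opening the
    next. The number of fragments follows from the total and the capacity."""
    return [subtasks[i:i + fragment_capacity]
            for i in range(0, len(subtasks), fragment_capacity)]
```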
In a possible embodiment, when placing the subtasks whose target processing strategy is the serial processing strategy into different serial fragments, the second placing module 604 is configured to:
summarize the subtasks whose target processing strategy is the serial processing strategy according to their account information, obtaining a plurality of summarized serial groups;
and determine the serial fragment corresponding to each serial group, and place the subtasks in different serial groups into the corresponding serial fragments respectively.
In a possible embodiment, when placing the subtasks in different serial groups into the corresponding serial fragments, the second placing module 604 is configured to:
if a target serial group is detected whose total number of subtasks requiring serial processing exceeds the capacity of the corresponding serial fragment, merge at least two serial fragments whose total number of stored subtasks is zero into one summary serial fragment, and place the subtasks in the target serial group into the summary serial fragment.
Based on the same inventive concept, an embodiment of the present application provides an electronic device that can implement the functions of the serial-parallel hybrid processing apparatus discussed above. Referring to fig. 7, the device includes a processor 701 and a memory 702, where the memory 702 stores program code; when the program code is executed by the processor 701, the processor 701 performs the steps of the serial-parallel hybrid processing method according to the various exemplary embodiments of the present application described above in this specification.
Based on the same inventive concept, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when run on a computer, causes the computer to perform any of the serial-parallel hybrid processing methods discussed above. Since the principle by which the computer-readable storage medium solves the problem is similar to that of the serial-parallel hybrid processing method, its implementation can refer to the implementation of the method, and repeated details are not described again.
Based on the same inventive concept, an embodiment of the present application further provides a computer program product comprising computer program code which, when run on a computer, causes the computer to perform any of the serial-parallel hybrid processing methods discussed above. Since the principle by which the computer program product solves the problem is similar to that of the serial-parallel hybrid processing method, its implementation can refer to the implementation of the method, and repeated details are not described again.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method of serial-parallel hybrid processing, the method comprising:
dividing a main task to be processed into a plurality of subtasks based on service scenario type, wherein the type of the main task to be processed is a batch receipt-and-payment type;
determining a target processing strategy corresponding to each subtask based on a preset correspondence between service scenario types and processing strategies;
placing the subtasks whose target processing strategy is the parallel processing strategy into different parallel fragments respectively;
placing the subtasks whose target processing strategy is the serial processing strategy into different serial fragments respectively;
and processing the parallel fragments and serial fragments in parallel with one another, processing the subtasks within each parallel fragment in parallel, and processing the subtasks within the same serial fragment serially.
2. The method of claim 1, wherein whether the type of the main task to be processed is the batch receipt-and-payment type is determined by:
extracting the corresponding account information from each item to be processed included in the main task to be processed, wherein the total amount of the items to be processed exceeds a first preset threshold;
and if the payment types corresponding to the account information are not consistent, determining that the type of the main task to be processed is the batch receipt-and-payment type, wherein the payment types comprise payment and collection.
3. The method of claim 2, wherein the dividing the main task to be processed into a plurality of subtasks based on service scenario type comprises:
determining the service scenario type corresponding to each item to be processed based on the account information and payment type corresponding to that item in the main task to be processed;
and dividing the items to be processed belonging to the same service scenario type into the same subtask, wherein there are a plurality of service scenario types.
4. The method of claim 1, wherein the placing the subtasks whose target processing strategy is the parallel processing strategy into different parallel fragments respectively comprises:
summarizing the subtasks whose target processing strategy is the parallel processing strategy based on the execution order of the subtasks, obtaining the total number of subtasks requiring parallel processing;
and placing the summarized subtasks into different parallel fragments respectively based on the capacity of the parallel fragments and the total number.
5. The method of claim 2, wherein the placing the subtasks whose target processing strategy is the serial processing strategy into different serial fragments respectively comprises:
summarizing the subtasks whose target processing strategy is the serial processing strategy according to their account information, obtaining a plurality of summarized serial groups;
and determining the serial fragment corresponding to each serial group, and placing the subtasks in different serial groups into the corresponding serial fragments respectively.
6. The method of claim 5, wherein the placing the subtasks in different serial groups into the corresponding serial fragments respectively comprises:
if a target serial group is detected whose total number of subtasks requiring serial processing exceeds the capacity of the corresponding serial fragment, merging at least two serial fragments whose total number of stored subtasks is zero into a summary serial fragment, and placing the subtasks in the target serial group into the summary serial fragment.
7. A serial-parallel hybrid processing apparatus, comprising:
a dividing module, configured to divide a main task to be processed into a plurality of subtasks based on service scenario type, wherein the type of the main task to be processed is a batch receipt-and-payment type;
a determining module, configured to determine a target processing strategy corresponding to each subtask based on a preset correspondence between service scenario types and processing strategies;
a first placing module, configured to place the subtasks whose target processing strategy is the parallel processing strategy into different parallel fragments respectively;
a second placing module, configured to place the subtasks whose target processing strategy is the serial processing strategy into different serial fragments respectively;
and a processing module, configured to process the parallel fragments and serial fragments in parallel with one another, process the subtasks within each parallel fragment in parallel, and process the subtasks within the same serial fragment serially.
8. An electronic device, comprising:
a memory for storing program instructions;
a processor for calling the program instructions stored in the memory and executing, in accordance with the obtained program instructions, the steps included in the method of any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a computer, cause the computer to perform the method according to any one of claims 1-6.
10. A computer program product, the computer program product comprising: computer program code which, when run on a computer, causes the computer to perform the method of any of the preceding claims 1-6.
CN202210957421.8A 2022-08-10 2022-08-10 Serial-parallel hybrid processing method, device, equipment and medium Pending CN115237576A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210957421.8A CN115237576A (en) 2022-08-10 2022-08-10 Serial-parallel hybrid processing method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210957421.8A CN115237576A (en) 2022-08-10 2022-08-10 Serial-parallel hybrid processing method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN115237576A true CN115237576A (en) 2022-10-25

Family

ID=83680008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210957421.8A Pending CN115237576A (en) 2022-08-10 2022-08-10 Serial-parallel hybrid processing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN115237576A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190171763A1 (en) * 2017-12-06 2019-06-06 Futurewei Technologies, Inc. High-throughput distributed transaction management for globally consistent sharded oltp system and method of implementing
CN110209496A (en) * 2019-05-20 2019-09-06 中国平安财产保险股份有限公司 Task sharding method, device and sliced service device based on data processing
CN113297188A (en) * 2021-02-01 2021-08-24 淘宝(中国)软件有限公司 Data processing method and device
CN113485806A (en) * 2021-07-02 2021-10-08 中国建设银行股份有限公司 Method, device, equipment and computer readable medium for processing task


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination