CN107423028A - A parallel scheduling method for large-scale processes - Google Patents
A parallel scheduling method for large-scale processes
- Publication number
- CN107423028A (application CN201610343434.0A)
- Authority
- CN
- China
- Prior art keywords
- pipeline
- flow
- node
- processing
- template
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
- G06F9/3818—Decoding for concurrent execution
- G06F9/3822—Parallel decoding, e.g. parallel decode units
Abstract
The invention discloses a parallel scheduling method for large-scale processes that changes the structure and pattern of process execution, decouples the processing performance of active process instances from their total number, substantially reduces the server resources required for large-scale process processing, and reduces the system concurrency conflicts caused by the number of active process instances. The method builds one processing pipeline per process template node; physically, a pipeline corresponds to a process, thread, or event, and each node corresponds to one pipeline instance. At initialization, a pipeline extracts the template and node information and precompiles or loads the node's execution code. Each pipeline then runs independently and continuously, and together the pipelines complete all process processing and computation, with every pipeline using the same processing method. The invention is particularly suitable for the scheduling of large-scale processes.
Description
Technical field
The present invention relates to process scheduling methods in process management systems and process processing systems, and in particular to a scheduling method for large-scale processes.
Background technology
In a process management system or process processing system, at design time a process designer or administrator designs process templates (Process Template) according to requirements and stores them in the system. A process template consists of one or more nodes (Node), and each node of the template defines one execution step of the process.
At run time, the business system or an upstream system maps incoming information and data, according to their attributes and relations, onto a process template, and the system creates a new process instance (Process Instance) from that template; every new business item or piece of upstream data creates a new process instance from its corresponding template. A process instance has the same nodes and relations as its source template, and the system executes it forward, node by node, according to the node order and logic, until completion. All process instances derived from the same template are identical except for their state (the information and data carried on the instance). A process instance that has not yet completed is called an active process instance.
In the existing process execution and processing mechanism, active process instances are separated from one another and executed independently: processing is performed in units of individual instances, and each executing active process instance runs in its own thread (Thread), pseudo-thread, or pipeline (Pipeline). For example, under the current mechanism, 100 active process instances to be executed may be split across 100 threads, or across 10 threads with the instances queued and executed serially inside each thread.
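The existing per-instance mechanism described above can be illustrated with a minimal sketch (not from the patent; function and variable names are assumptions): 100 active process instances executed over a pool of 10 worker threads, with instances queued and run serially inside each worker.

```python
# Illustrative sketch of the existing mechanism: each active process
# instance is executed independently, here 100 instances distributed
# over a pool of 10 threads.
from concurrent.futures import ThreadPoolExecutor

def execute_instance(instance_id: int) -> str:
    # Placeholder for advancing one process instance through all its nodes.
    return f"instance {instance_id} completed"

instances = list(range(100))  # 100 active process instances

with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(execute_instance, instances))

print(len(results))  # 100
```

Because every instance is an independent unit of work here, per-instance overhead (scheduling, queries, updates) is paid once per instance, which is the scaling problem the patent describes next.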
The problem with the existing mechanism is that, in large-scale process operation or processing, the number of active process instances can be very large (hundreds of thousands or millions). Independent execution is inefficient: the per-unit processing time of each active process instance grows linearly with the total number of instances, and the performance of every access operation, such as query and update, degrades across the board. Meanwhile, the existing mechanism consumes enormous compute resources: a typical general-purpose server can carry no more than about 1,000 active process instances at a time, so 100,000 active process instances would require on the order of a hundred servers.
Summary of the invention
The technical problem to be solved by the invention is to provide a parallel scheduling method for large-scale processes that changes the structure and pattern of process execution, decouples the processing performance of active process instances from their total number, substantially reduces the server resources required for large-scale process processing, and reduces the system concurrency conflicts caused by the number of active process instances.
To solve the above problem, the present invention adopts a parallel scheduling method for large-scale processes, in which each process consists of several process template nodes. The parallel scheduling method is:
1) A processing pipeline is built for each process template node; physically, a pipeline corresponds to a process, thread, or event, and each node corresponds to one pipeline instance. At initialization, the pipeline extracts the template and node information and precompiles or loads the node's execution code.
2) Each pipeline runs independently and continuously, and together the pipelines complete all process processing and computation. Every pipeline uses the same processing method, whose concrete steps are:
A) Query the persistent store or the process instance pool and extract all active process instances whose current node is this pipeline's node; the result is a set of active process instances.
B) Iterate over the set from step A, concurrently or sequentially. Take each active process instance as an input parameter, execute the node's processing action, and obtain the instance's result state and output data. At the same time, advance the instance's current node to the next node according to the process template's node settings.
C) After the iteration in step B completes, the result is the set of result states and the set of output data for all active process instances obtained in step A.
D) The pipeline updates or stores the result-state set from step C in a single batch operation, and likewise writes or stores the output-data set from step C in a single batch operation.
E) The pipeline has finished this round of processing; it performs post-execution cleanup and returns to step A to repeat.
The beneficial effect of the invention is that, using a two-dimensional reference table, the invention takes the node column (Column) as the execution entry point instead of the existing processing mode that takes the process instance as the entry point, turning the existing row-wise computation into column-wise computation. This solves the problems of low execution efficiency and heavy consumption of compute resources that existing methods exhibit in large-scale process processing when the number of active process instances is huge (e.g. more than 10,000 active instances).
Brief description of the drawings
Fig. 1 is a schematic diagram of the parallel scheduling method of the present invention.
Embodiment
A specific embodiment of the parallel scheduling method for large-scale processes of the present invention is described in detail below with reference to the accompanying drawing.
The parallel scheduling method for large-scale processes of the present invention is based on a process data structure organized as a two-dimensional reference table (Reference Table, hereinafter "the table"). As shown in Fig. 1, each table represents one kind of process: the table's columns (Column) correspond to the process template (Process Template), each column corresponding to one process node (Node) in the template; the table's rows (Row) correspond to active process instances (Process Instance), one row per active instance; and each cell corresponds to that row's instance at that column's node. The table can be implemented, as actually needed, with an array (Array), linked list (Linked List), tree (Tree), database table (Database Table), or other data structure.
The above data structure is described by the following pseudo-code:
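The patent's original pseudo-code listing does not survive in this copy of the text. As an illustrative stand-in (class and field names are assumptions, not the patent's), the two-dimensional reference table described above can be sketched in Python as:

```python
# Illustrative sketch of the two-dimensional reference table: columns are
# the template's nodes, rows are active process instances, and each row's
# current_node index marks the cell (row, column) the instance occupies.
from dataclasses import dataclass, field

@dataclass
class ProcessTemplate:
    nodes: list  # ordered node names; one table column per node

@dataclass
class ProcessInstance:
    instance_id: int
    current_node: int = 0                      # column index in the template
    state: dict = field(default_factory=dict)  # data carried by the instance

@dataclass
class ReferenceTable:
    template: ProcessTemplate
    rows: list = field(default_factory=list)   # one row per active instance

    def instances_at(self, node_index: int) -> list:
        # All rows whose current node is the given column, i.e. the query
        # a node's pipeline issues when it starts a processing round.
        return [r for r in self.rows if r.current_node == node_index]

template = ProcessTemplate(nodes=["receive", "review", "approve"])
table = ReferenceTable(template, rows=[ProcessInstance(i) for i in range(3)])
print(len(table.instances_at(0)))  # 3
```

A production implementation would back the same shape with a database table or linked structure, as the text notes, rather than in-memory lists.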
The concrete steps of the parallel scheduling method for large-scale processes of the present invention are:
1) A processing pipeline (Pipeline) is built for each process template node; physically, a pipeline can correspond to a process, thread, or event as actually required. Each node corresponds to one pipeline instance. At initialization, the pipeline extracts the template and node information and precompiles or loads the node's execution code.
2) Each pipeline runs independently and persistently, and together the pipelines complete all process processing and computation. Every pipeline uses exactly the same processing strategy and algorithm, as follows:
A) Query the persistent store or the process instance pool and extract all active process instances whose current node is this pipeline's node; the result is a set of active process instances.
B) Iterate over the set from step A, concurrently or sequentially. Take each active process instance as an input parameter, execute the node's processing action, and obtain the instance's result state and output data. At the same time, advance the instance's current node to the next node according to the process template's node settings.
C) After the iteration in step B completes, the result is the set of result states and the set of output data for all active process instances obtained in step A.
D) The pipeline updates or stores the result-state set from step C in a single batch operation, and likewise writes or stores the output-data set from step C in a single batch operation.
E) The pipeline has finished this round of processing; it performs post-execution cleanup and enters the next round, i.e. it returns to step A and repeats.
The above strategy and algorithm are described by the following pseudo-code (some pseudo-code variables and objects reference the data-structure pseudo-code above):
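The strategy pseudo-code is not reproduced in this copy of the text. As an illustrative sketch (function and variable names are assumptions, not the patent's), one processing round of a single node's pipeline, following steps A through E above, might look like:

```python
# Illustrative sketch of one pipeline round (steps A-E): one pipeline per
# template node, each round batching every instance currently at that node.

def pipeline_round(store, node_index, node_action, num_nodes):
    # Step A: query all active instances whose current node is this node.
    batch = [inst for inst in store if inst["node"] == node_index]
    result_states, output_data = [], []
    for inst in batch:
        # Step B: execute the node action with the instance as input,
        # then advance the instance to its next node.
        state, output = node_action(inst)
        inst["node"] = min(inst["node"] + 1, num_nodes)
        # Step C: accumulate the result-state set and the output-data set.
        result_states.append(state)
        output_data.append(output)
    # Step D: a real pipeline would now batch-update the persistent store
    # with result_states and batch-write output_data; elided in this sketch.
    # Step E: round complete; the caller cleans up and returns to step A.
    return result_states, output_data

# A toy instance pool: five instances at node 0, one already at node 1.
store = [{"node": 0, "data": i} for i in range(5)] + [{"node": 1, "data": 9}]
double = lambda inst: ("done", inst["data"] * 2)

states, outputs = pipeline_round(store, 0, double, num_nodes=3)
print(states)   # ['done', 'done', 'done', 'done', 'done']
print(outputs)  # [0, 2, 4, 6, 8]
```

The key design point is that the batch, not the individual instance, is the unit of work: queries and writes are amortized over the whole set, which is how the method decouples per-instance cost from the total instance count.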
Obviously, the above embodiment is merely an example given for clarity of illustration and is not a limitation on the embodiments. Those of ordinary skill in the art can make other changes or variations in different forms on the basis of the above description. It is neither necessary nor possible to exhaust all embodiments here, and obvious changes or variations derived therefrom remain within the protection scope of the invention.
Claims (1)
1. A parallel scheduling method for large-scale processes, each process consisting of several process template nodes, characterized in that the parallel scheduling method is:
1) A processing pipeline is built for each process template node; physically, a pipeline corresponds to a process, thread, or event, and each node corresponds to one pipeline instance. At initialization, the pipeline extracts the template and node information and precompiles or loads the node's execution code.
2) Each pipeline runs independently and continuously, and together the pipelines complete all process processing and computation. Every pipeline uses the same processing method, whose concrete steps are:
A) Query the persistent store or the process instance pool and extract all active process instances whose current node is this pipeline's node; the result is a set of active process instances.
B) Iterate over the set from step A, concurrently or sequentially. Take each active process instance as an input parameter, execute the node's processing action, and obtain the instance's result state and output data. At the same time, advance the instance's current node to the next node according to the process template's node settings.
C) After the iteration in step B completes, the result is the set of result states and the set of output data for all active process instances obtained in step A.
D) The pipeline updates or stores the result-state set from step C in a single batch operation, and likewise writes or stores the output-data set from step C in a single batch operation.
E) The pipeline has finished this round of processing; it performs post-execution cleanup and returns to step A to repeat.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610343434.0A CN107423028A (en) | 2016-05-23 | 2016-05-23 | A parallel scheduling method for large-scale processes |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610343434.0A CN107423028A (en) | 2016-05-23 | 2016-05-23 | A parallel scheduling method for large-scale processes |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107423028A (en) | 2017-12-01 |
Family
ID=60421910
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610343434.0A Pending CN107423028A (en) | 2016-05-23 | 2016-05-23 | A parallel scheduling method for large-scale processes |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107423028A (en) |
Cited By (4)

Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109062691A (en) * | 2018-07-19 | 2018-12-21 | 芯视图(常州)微电子有限公司 | Method and device for generating and distributing lightweight vertex shading threads |
CN109062691B (en) * | 2018-07-19 | 2023-07-04 | 南京军微半导体科技有限公司 | Method and device for generating and distributing lightweight vertex shading threads |
WO2021135699A1 (en) * | 2019-12-31 | 2021-07-08 | 思必驰科技股份有限公司 | Decision scheduling customization method and device based on information flow |
CN111399851A (en) * | 2020-06-06 | 2020-07-10 | 四川新网银行股份有限公司 | Batch processing execution method based on distributed system |

2016

- 2016-05-23 CN CN201610343434.0A patent/CN107423028A/en active Pending
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20171201 |