CN106681820A - Message combination based extensible big data computing method - Google Patents

Message combination based extensible big data computing method

Info

Publication number
CN106681820A
CN106681820A (application CN201611252002.5A)
Authority
CN
China
Prior art keywords
big data
task
message
calculating
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611252002.5A
Other languages
Chinese (zh)
Other versions
CN106681820B (en)
Inventor
汤小春
田凯飞
段慧芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201611252002.5A
Publication of CN106681820A
Application granted
Publication of CN106681820B
Expired - Fee Related

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a message-combination-based extensible big data computing method, which solves the technical problem that conventional big data computing methods have poor practicality. The technical scheme includes: establishing an abstract big data computing job topology; deploying the big data computing job; generating the computing-task execution plan of the big data computing job; scheduling the big data computing tasks; performing load balancing during task execution; executing the big data computing tasks; and ending the job's life when the job has finished executing. The topology of the computing tasks is separated from the computing tasks themselves: the topology defines the abstraction of the big data computing job, the computing tasks represent the real execution process, and message passing acts as the control tie between the abstract tasks and the computing tasks. A master-slave structure is adopted, in which a scheduler on the master node controls the execution of the computing tasks on each slave cluster node according to the topology of the abstract tasks. The method has excellent practicality.

Description

Extensible big data computing method based on message combination
Technical field
The present invention relates to big data computing methods, and more particularly to an extensible big data computing method based on message combination.
Background technology
Document " framework big data:Challenge, status and prospects, Chinese journal of computers, 2011, Vol34 (10), p1741-175 " Disclose the important big data computation model of three kinds for presently, there are:The MapReduce computation schemas of Hadoop;Comprehensive batch processing Calculate, streaming is calculated, iterate to calculate and scheme the Spark systems for calculating;And the computation schema calculated based on internal memory.Hadoop exists Batch processing field application it is very good, and in terms of diagram data calculating poor-performing;Spark systems calculate mould as a mixing Formula;Calculating based on internal memory is very high for stream data performance.But, these computation schemas are remained at some not in some fields Foot part:(1) once completing, calculation scale determines substantially, it is impossible to dynamic expansion for programming;(2) once the big data of user is processed Program does a small variation, it is necessary to recompilate whole calculating process;(3) topological structure between task is by data relationship Determine, user cannot autonomous control;(4) type of calculating task is single, it is impossible to directly using the traditional calculations industry for having existed Business, it is impossible to mix with other heterogeneous services.
Summary of the invention
To overcome the poor practicality of existing big data computing methods, the present invention provides an extensible big data computing method based on message combination. The method first establishes an abstract big data computing job topology, then deploys the big data computing job, generates the computing-task execution plan of the big data computing job, schedules the big data computing tasks, performs load balancing during task execution, and executes the big data computing tasks. When the job has finished executing, or its execution condition is no longer satisfied, or the job is cancelled by the user, the job exits the system and its life ends. Because the topology of the computing tasks is separated from the computing tasks themselves, the topology defines the abstraction of the big data computing job, the computing tasks represent the real execution process, and message passing acts as the control tie between the abstract tasks and the computing tasks. A master-slave structure is adopted, in which the scheduler on the master node controls the execution of the computing tasks on each slave cluster node according to the topology of the abstract tasks, so the method has good practicality.
The technical solution adopted by the present invention to solve the technical problem is an extensible big data computing method based on message combination, characterized by comprising the following steps:
Step 1, establishing the abstract big data computing job topology: the big data computing job is first divided into a series of small tasks, and the small tasks are combined using customized sequence, loop, condition-selection and parallel structures. The sequence, loop, condition-selection and parallel structures control the execution process of the small tasks. The big data job is created in text form.
Step 2, deploying the big data computing job: the user deploys the computing tasks corresponding to the abstract tasks onto the cluster nodes.
Step 3, generating the computing-task execution plan of the big data computing job: the user's abstract computing tasks are submitted to the management node. The management node checks the syntax and semantics of the big data computing definition file, then forms and saves the execution plan.
Step 4, scheduling the big data computing tasks: the big data computing job engine obtains the task execution plan and forms a message-sending sequence. The message-sending plan is divided into stages according to the parallel, sequential and conditional relationships, and the messages in the same stage are sent simultaneously. Only after all reply messages of the previous stage have arrived does the message sending of the next stage begin.
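To make the stage-by-stage dispatch concrete, the following is a minimal Java sketch, not the patent's actual job engine: it assumes the execution plan has already been flattened into an ordered list of stages and that each message is acknowledged by a single reply; all class and method names are illustrative.

// Illustrative sketch only: stage-by-stage message dispatch as described in step 4.
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class StagedDispatcher {

    /** Sends one message to a computing object; completes when its reply returns. */
    interface MessageSender {
        CompletableFuture<Void> send(String messageName);
    }

    /**
     * Dispatches the plan stage by stage: all messages of a stage are sent
     * concurrently, and the next stage is opened only after every reply of the
     * previous stage has arrived.
     */
    public static void dispatch(List<List<String>> stages, MessageSender sender) {
        for (List<String> stage : stages) {
            CompletableFuture<?>[] replies = stage.stream()
                    .map(sender::send)
                    .toArray(CompletableFuture[]::new);
            CompletableFuture.allOf(replies).join(); // wait for all replies of this stage
        }
    }

    public static void main(String[] args) {
        // Example plan: messages 1 and 2 run in parallel, then message 3.
        List<List<String>> plan = List.of(List.of("msg1", "msg2"), List.of("msg3"));
        dispatch(plan, name -> CompletableFuture.runAsync(
                () -> System.out.println("sent " + name + ", reply received")));
    }
}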
Step 5, load balancing during computing-task execution: the task allocation module optimizes the message-sending order according to the dependencies between tasks and selects the optimal task execution node on which to start the computing object. The management node can also balance the load according to the load of each working node, so as to improve performance. Each working node maintains the state of the computing objects allocated to it, and the state of a computing object is in turn reflected in the corresponding message object on the management node.
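As a simple illustration of selecting an execution node by load, the sketch below picks the working node with the fewest running computing objects; the WorkerLoad record and the selection rule are assumptions, since the text does not fix a particular load metric.

// Illustrative sketch only: choosing an execution node by current load, as in step 5.
import java.util.Comparator;
import java.util.List;

public class LoadBalancer {

    /** Load snapshot that the management node keeps for one working node (assumed shape). */
    record WorkerLoad(String node, int runningObjects) {}

    /** Picks the working node with the fewest running computing objects. */
    static String pickNode(List<WorkerLoad> loads) {
        return loads.stream()
                .min(Comparator.comparingInt(WorkerLoad::runningObjects))
                .map(WorkerLoad::node)
                .orElseThrow(() -> new IllegalStateException("no working nodes registered"));
    }

    public static void main(String[] args) {
        List<WorkerLoad> loads = List.of(
                new WorkerLoad("worker-1", 4),
                new WorkerLoad("worker-2", 1),
                new WorkerLoad("worker-3", 3));
        System.out.println("dispatch next computing object to " + pickNode(loads));
    }
}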
Step 6, executing the big data computing tasks: after the job engine has obtained the parameters needed by each task, it sends an execution request to the execution node. On receiving the request, the execution node starts executing the methods contained in the computing object and returns the computing-object state R to the job engine, which records the state in the system shared buffer. Once the computing object has finished executing, it returns the finished state (Do or De) to the master node through the cluster node, and the computing result is also returned to the master node. After the engine on the master node obtains the computing result of the computing object, it changes the state of the abstract object and sends the next request.
Step 7, end of the big data computing job: when the job has finished executing, or its execution condition is no longer satisfied, or the job is cancelled by the user, the job exits the system and its life ends.
The beneficial effects of the invention are as follows: the method first establishes an abstract big data computing job topology, then deploys the big data computing job, generates the computing-task execution plan of the big data computing job, schedules the big data computing tasks, performs load balancing during task execution, and executes the big data computing tasks. When the job has finished executing, or its execution condition is no longer satisfied, or the job is cancelled by the user, the job exits the system and its life ends. Because the topology of the computing tasks is separated from the computing tasks themselves, the topology defines the abstraction of the big data computing job, the computing tasks represent the real execution process, and message passing acts as the control tie between the abstract tasks and the computing tasks. A master-slave structure is adopted, in which the scheduler on the master node controls the execution of the computing tasks on each slave cluster node according to the topology of the abstract tasks, so the method has good practicality. Tests show that the cost of enterprise IT infrastructure investment is reduced by up to 40%, and enterprise labor operating costs are reduced by more than 30%.
The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
Brief description of the drawings
Fig. 1 shows an example of a message object and a computing object;
Fig. 2 is the graph representation of a big data computing job;
Fig. 3 is the definition file of a big data computing job;
Fig. 4 is the computation model of the big data computing method;
Fig. 5 is an example of a strongly connected graph;
Fig. 6 is the overall system structure of the big data computing method;
Fig. 7 is a sample K-means computation model.
Specific embodiment
Referring to Figs. 1-7, the extensible big data computing method based on message combination of the present invention is described in detail as follows:
1. Definitions relevant to the present invention.
Definition 1, message object: referring to the left part of Fig. 1, a message object contains fixed computation-related attributes such as parameter, stagein and stageout; it is the abstraction of one specific computation. A message object is described as follows:
O = (m, parameter, stagein, stageout, code, state)
where m is the message name; the message is used to control the execution of an executable component or module. parameter is the input parameter of the message. stagein is the description of the input data needed when the executable component or module executes. stageout is the description of the result output of the executable component or module. code and state are output parameters: code is the return value received from the executable component or module, and state is the execution state of the message object, o.state = {W, R, D}. When the message is ready to be sent, the state is W; after the message has been sent to the executable component or module, the state is R; after the executable component or module finishes executing, the state is the finished state D. If the executable component or module returns correctly, the state is Do; otherwise the state is De.
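A minimal Java sketch of such a message object follows; the field names track the tuple above, while the method names, the success code and the fact that state transitions are driven by plain method calls are assumptions for illustration.

// Illustrative sketch only: a message object with the attributes of Definition 1.
public class MessageObject {

    /** W = ready to send, R = sent and running, DO/DE = finished (correct / erroneous). */
    enum State { W, R, DO, DE }

    final String m;          // message name
    final String parameter;  // input parameter of the message
    final String stagein;    // description of the input data the computing object needs
    final String stageout;   // description of the result output
    int code;                // return value received from the computing object
    State state = State.W;

    MessageObject(String m, String parameter, String stagein, String stageout) {
        this.m = m;
        this.parameter = parameter;
        this.stagein = stagein;
        this.stageout = stageout;
    }

    /** Called when the message has been sent to the computing object. */
    void markSent() { state = State.R; }

    /** Called when the computing object returns; 0 is assumed to mean a correct return. */
    void markFinished(int returnCode) {
        this.code = returnCode;
        this.state = (returnCode == 0) ? State.DO : State.DE;
    }

    public static void main(String[] args) {
        MessageObject o = new MessageObject("sort", "k=10", "input description", "output description");
        o.markSent();
        o.markFinished(0);
        System.out.println(o.m + " finished in state " + o.state);
    }
}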
Definition 2, computing object: an executable component or module is called a computing object; it may also be a complete program or a simple command.
Referring to the right part of Fig. 1, one example of a computing object is the mysort class, which contains a static method named compute(). In practice a computing object may contain 1 to k methods, determined by the logic of the program; only after all methods of the computing object have been executed does the state of the message object become the finished state. If any method fails during the computation, the finished state of the computing object will be De.
After a computing object has completed one computation, it may produce an output, and this output serves as the input of other computing objects or of its own next computation.
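The following is a minimal Java sketch in the spirit of the mysort example; since Fig. 1 is not reproduced here, the method body, its parameters and the return-code convention are assumptions.

// Illustrative sketch only: a computing object with a single static compute() method.
import java.util.Arrays;

public class mysort {

    /**
     * The single method of this computing object: it sorts its input (passed in
     * directly here for simplicity) and returns 0 on success, so that the message
     * object's state becomes Do rather than De.
     */
    public static int compute(int[] input) {
        try {
            Arrays.sort(input);                          // the actual computation
            System.out.println(Arrays.toString(input));  // stands in for the stageout result
            return 0;                                    // correct return -> message state Do
        } catch (RuntimeException e) {
            return 1;                                    // error return -> message state De
        }
    }

    public static void main(String[] args) {
        System.out.println("return code: " + compute(new int[] {5, 2, 9, 1}));
    }
}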
Definition 3, parallel object: a parallel object can trigger multiple message objects into the waiting state. It is used to control multiple message objects so that they send messages in parallel; it does not itself send messages to executable components.
Definition 4, jump object: a jump object is usually used to control the loop execution of certain message objects.
Definition 5, conditional object: a conditional object is a special case of the parallel object. It triggers the state change of multiple abstract objects or messages simultaneously; the message objects that satisfy the condition enter the waiting state, and the message objects that do not satisfy the condition enter the finished state directly.
Definition 6, combining structure of messages: a big data computing job contains a large number of computing objects and therefore corresponds to a large number of message objects. How to organize these messages is an important problem. Referring to Table 1, in order to express the interaction process of messages, the sequence, loop, condition-selection and parallel combining structures are used to define the messages and to control the interaction order of the messages of the message objects.
Table 1. Combining structures of messages
Definition 7, big data job: a big data computing job generally consists of a large number of executable components. The executable components exchange messages according to certain rules and finally fulfil the user's computing requirements. A big data computing job can therefore be abstracted as a set of message interactions, in which the combining structures of the messages control the message-sending order. For each specific big data computing job, the executable computing components are deployed on the distributed cluster, and the combination of message objects is deployed on a unified management node.
The combination of message objects deployed on the unified management node can be described in text form, called the job definition file. The job definition file contains message objects, sequence objects, jump objects, branch objects and parallel objects, among which the sequence objects, jump objects, branch objects and parallel objects are collectively called control objects. The message objects and control objects together form the definition file of the big data computing job. In the definition file, each message object and control object is identified by a unique character string.
The definition file of the big data job is deployed on a management node, and the computing objects are distributed over different working nodes. During job execution, the scheduling engine parses the definition file and, according to the logical relationships between the message objects and control objects, sends messages from the message objects to the corresponding computing objects; the control objects decide the sending order of the messages. After all message objects have sent their messages, one computation ends. If the user needs to perform the computation again, message sending can be restarted.
2. Implementation structure of the present invention.
(1) Example of a big data job definition file.
Fig. 2 shows the graph representation of a big data job, in which the 6 message objects are represented by numbered circles, (p1, p2) is a parallel object, (f1, f2) is a conditional object, and g(3,2) is a jump object.
Fig. 3 is the definition file of the above big data computing job, in which MSG is the keyword of a message object, Para is the keyword of a parallel object, and If is the keyword of a conditional object; an If object contains a condition and an Alert condition, and the condition can be expressed by an external setting or by the return value of the object preceding the If object. If the Alert condition is satisfied, the first branch is executed, otherwise the second branch is executed. Goto is the keyword of a jump object. Paired "{ }" are the keywords that delimit a message combination.
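Because Fig. 3 itself is not reproduced here, the fragment below is only a hypothetical sketch of what such a definition file might look like; only the keywords MSG, Para, If, Alert, Goto and the paired "{ }" grouping are taken from the description above, and every other syntactic detail (attribute notation, comment marker, branch layout) is an assumption.

# Hypothetical job definition file sketch -- not the actual Fig. 3.
MSG msg1 parameter="..." stagein="..." stageout="..."
Para p1 {
    MSG msg2 parameter="..." stagein="..." stageout="..."
    MSG msg3 parameter="..." stagein="..." stageout="..."
}
If f1 Alert code == 0 {
    MSG msg4 parameter="..."     # first branch: taken when the Alert condition holds
} {
    MSG msg5 parameter="..."     # second branch: taken otherwise
}
Goto g(3,2)                      # jump object: repeats the combination of messages 2..3 (semantics assumed)
MSG msg6 parameter="..."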
(2) Structure of the computation model.
One computation is the process of the message object's state transitioning from waiting (W), through running (R), to the finished state (D). Fig. 4 illustrates the execution of one computation: the computing process is the change of the message object from the W state to the D state, i.e. the sequence (start, W)(s, R)(e, D). When the message object receives a start event, it enters the W state; when a message object in the W state receives an s event, it sends a message invoking the computing object and changes its own state to R. After the computing object receives the message, it starts executing; when it finishes running it returns a message to the message object, and after the message object receives the return value of the computing object, its state is set to D. If the return value of the computing object is within the range specified by the message object, the state is the normal finished state, i.e. the Do state; otherwise the state is set to the erroneous finished state, i.e. the De state.
A computing object may contain 1 to k methods, determined by the logic of the program; only after all methods of the computing object have been executed does the state of the message object become the finished state. If any method fails during the computation, the finished state of the computing object will be De.
After a computing object has completed one computation, it may produce an output, and this output serves as the input of other computing objects or of its own next computation.
Fig. 5 is an example of a strongly connected graph used in a computation, and it essentially illustrates the computation model. Each vertex in Fig. 5 carries a value. The lines between vertices A, B, C and D represent the edges of the graph, and the numbers at the vertices represent the vertex values. The connectivity of the graph is checked with the strategy of propagating the maximum vertex value to every vertex of the graph. Vertices A, B, C and D represent 4 computing objects deployed on different working nodes. S represents the message-object set of A, B, C and D, located on a management node; the solid lines from S indicate that the message objects in S send messages to the computing objects A, B, C and D, and the dashed lines represent the transfer of vertex values during the computation. In this computation model, the 4 message objects of S are combined in parallel, and the arrival of the 4 concurrent messages at the computing objects represents one computation. The return messages are omitted in the figure.
In each computation, S sends a message to each computing object; after receiving the message, the computing object performs the calculation and returns the result to S. When the return values of vertices A, B, C and D no longer change, the computation ends and the algorithm terminates. In Fig. 5(a), S sends a computation message to computing object A; after A computes, it obtains the maximum value 6, changes its own value and returns a message; S also sends messages to B, C and D at the same time, and after the computing object represented by each vertex has computed once, its state changes into that of Fig. 5(b). The dashed lines in Fig. 5 represent the input relationships of the computing objects. After 4 computations, the vertex values shown in Fig. 5(c) and (d) no longer change, the computing process ends, and a connected graph has been obtained. When big data is computed with this method, after programming is completed, i.e. after the computing service is fixed, the user can, simply by modifying the big data job definition file, not only dynamically expand the computation scale, for example by adding more computing nodes, but also control the topology between the tasks and reuse existing computing services, which improves the flexibility of the big data computation topology and the independence between the computing job and its description.
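The maximum-value propagation can be illustrated with the following single-process Java sketch; the graph, the initial vertex values and all names are assumptions, and the distributed message exchange between S and the computing objects is replaced by a local loop.

// Illustrative sketch only: propagating the maximum vertex value until no value changes.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MaxPropagation {

    public static void main(String[] args) {
        // Undirected edges and initial vertex values (example data).
        Map<String, List<String>> edges = Map.of(
                "A", List.of("B", "D"),
                "B", List.of("A", "C"),
                "C", List.of("B", "D"),
                "D", List.of("A", "C"));
        Map<String, Integer> value = new HashMap<>(Map.of("A", 6, "B", 2, "C", 1, "D", 3));

        // Each round corresponds to one computation: every vertex (computing object)
        // replaces its value with the maximum of itself and its neighbours; the
        // management node would stop once no value changes any more.
        boolean changed = true;
        while (changed) {
            changed = false;
            Map<String, Integer> next = new HashMap<>(value);
            for (Map.Entry<String, List<String>> e : edges.entrySet()) {
                for (String neighbour : e.getValue()) {
                    if (value.get(neighbour) > next.get(e.getKey())) {
                        next.put(e.getKey(), value.get(neighbour));
                        changed = true;
                    }
                }
            }
            value = next;
        }
        // Vertices sharing the same final value belong to the same connected component.
        System.out.println("final vertex values: " + value);
    }
}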
Fig. 6 shows the overall structure of the computation model, which includes two kinds of working units: a management node and multiple working nodes. The management node is responsible for job scheduling and task distribution. The whole system supports HDFS, other storage systems, or a local file system, which are mainly used for data persistence.
(3) Input and output data.
The input data include the job definition file, the computing objects and the computation data. The job definition file is written in text form and stored in the local file system of the master node. A computing object can be Java bytecode, another binary file, or a SQL command; these can be stored in the local file system of the compute nodes or in an HDFS file system. The data used by the computing objects are stored in local files or in a distributed file system, provided that the computing objects on the working nodes can access these data.
The output information of the computing objects is stored on the local disks of the working nodes or in HDFS. The log information includes the start and end times of the message objects and computing objects, state-change events, and so on.
(4) Implementation of the management node and the working nodes.
The management node coordinates the work of the working nodes by sending commands to them, specifically including: 1. starting a new computation; 2. terminating a computation; 3. feeding back the state of computing objects; 4. querying states; and so on. The management node waits for the messages of all working nodes and guides what the working nodes will do next. The management node therefore also controls the synchronization of the computation: at the beginning of each computation, the management node sends the initiation messages.
The working nodes store the results and status flags of the computing objects and maintain the message queues needed for the current and the next computation. Each working node consists of 3 threads (a sketch in code follows this list):
1. A compute thread executes the computing objects on the working node; it also maintains an output message buffer. When the buffer is full, it is either sent to the communication thread over the network or passed directly to the local message-parser thread.
2. A communication thread sends and receives the messages in the buffer; it is also used to coordinate the messages between the management node and the working node. When a message buffer receives a message, it is delivered to the parser thread.
3. A message-parser thread parses the messages in the input message buffer and puts the parsed messages into the received-message queue of the corresponding compute thread for the next execution.
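A minimal Java sketch of the three threads wired together with in-memory queues follows; the queue types, message format and shutdown marker are assumptions made for illustration, not the actual implementation.

// Illustrative sketch only: compute, communication and message-parser threads of a working node.
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class WorkingNode {

    private static final String STOP = "__stop__"; // assumed shutdown marker

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> received = new LinkedBlockingQueue<>(); // filled by communication thread
        BlockingQueue<String> parsed   = new LinkedBlockingQueue<>(); // filled by parser thread
        BlockingQueue<String> output   = new LinkedBlockingQueue<>(); // filled by compute thread

        // 1. Communication thread: stands in for receiving messages from the management node.
        Thread communication = new Thread(() -> {
            for (String m : List.of("compute:sort", "compute:sum", STOP)) received.add(m);
        });

        // 2. Message-parser thread: parses received messages and queues them for execution.
        Thread parser = new Thread(() -> {
            try {
                for (String m = received.take(); ; m = received.take()) {
                    parsed.put(m.equals(STOP) ? STOP : m.substring("compute:".length()));
                    if (m.equals(STOP)) return;
                }
            } catch (InterruptedException ignored) { }
        });

        // 3. Compute thread: executes each computing object and buffers its result.
        Thread compute = new Thread(() -> {
            try {
                for (String task = parsed.take(); !task.equals(STOP); task = parsed.take()) {
                    output.put("result of " + task + " (state Do)");
                }
            } catch (InterruptedException ignored) { }
        });

        communication.start(); parser.start(); compute.start();
        communication.join(); parser.join(); compute.join();
        output.forEach(System.out::println); // would be sent back to the management node
    }
}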
(5) Computing objects on the working nodes.
Inside the compute() function of a computing object, the external input data, the configuration file and a global object can be accessed. The global object is used to coordinate the messages, data sharing and statistical summaries between the management node and the working nodes. At the start of each computation, the object locally mapped on the working node is updated; when the computation ends, the computing object notifies the management node of its completion state.
3. Implementation example of the present invention.
Using the big data computation model based on message passing, the K-means computing process is improved. The algorithm steps are as follows (a sketch in code is given after the list):
(1) Randomly select k points from the data as the initial cluster centres, each centre representing one cluster;
(2) Distribute the original data to m compute nodes, compute the distance of each point to the k cluster centres, and assign each point to its nearest cluster centre;
(3) Adjust the cluster centres by moving each centre to the geometric centre of its cluster;
(4) Repeat from step (2) until the cluster centres no longer change; the algorithm has then converged.
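A single-node Java sketch of these four steps is given below; the sample data, the value of k and the exact convergence test are assumptions, and the distribution of the data over m compute nodes in step (2) is omitted.

// Illustrative sketch only: the four K-means steps above on one-dimensional data.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class KMeansSketch {

    public static void main(String[] args) {
        double[] data = {1.0, 1.2, 0.8, 5.0, 5.3, 4.9, 9.1, 8.8};
        int k = 3;

        // (1) randomly select k points from the data as the initial cluster centres
        List<Double> shuffled = new ArrayList<>();
        for (double d : data) shuffled.add(d);
        Collections.shuffle(shuffled, new Random(42));
        double[] centres = new double[k];
        for (int c = 0; c < k; c++) centres[c] = shuffled.get(c);

        int[] assignment = new int[data.length];
        double[] previous;
        do {
            previous = centres.clone();

            // (2) assign each point to its nearest cluster centre
            for (int i = 0; i < data.length; i++) {
                int best = 0;
                for (int c = 1; c < k; c++) {
                    if (Math.abs(data[i] - centres[c]) < Math.abs(data[i] - centres[best])) best = c;
                }
                assignment[i] = best;
            }

            // (3) move each centre to the geometric centre (mean) of its cluster
            for (int c = 0; c < k; c++) {
                double sum = 0;
                int count = 0;
                for (int i = 0; i < data.length; i++) {
                    if (assignment[i] == c) { sum += data[i]; count++; }
                }
                if (count > 0) centres[c] = sum / count;
            }

            // (4) repeat from step (2) until the centres no longer change
        } while (!Arrays.equals(previous, centres));

        System.out.println("centres: " + Arrays.toString(centres));
        System.out.println("assignment: " + Arrays.toString(assignment));
    }
}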
Table 2 gives the speed-up ratio of the improved K-means algorithm when run on different numbers of compute nodes.
Table 2. Speed-up ratio of the K-means algorithm
Fig. 7 shows the process of implementing the improved K-means algorithm, in which task2 is a parallel object, task4 is a jump object, and the others are all message objects.

Claims (1)

1. An extensible big data computing method based on message combination, characterized by comprising the following steps:
Step 1, establishing the abstract big data computing job topology: the big data computing job is first divided into a series of small tasks, and the small tasks are combined using customized sequence, loop, condition-selection and parallel structures; the sequence, loop, condition-selection and parallel structures control the execution process of the small tasks; the big data job is created in text form;
Step 2, deploying the big data computing job: the user deploys the computing tasks corresponding to the abstract tasks onto the cluster nodes;
Step 3, generating the computing-task execution plan of the big data computing job: the user's abstract computing tasks are submitted to the management node; the management node checks the syntax and semantics of the big data computing definition file, then forms and saves the execution plan;
Step 4, scheduling the big data computing tasks: the big data computing job engine obtains the task execution plan and forms a message-sending sequence; the message-sending plan is divided into stages according to the parallel, sequential and conditional relationships, and the messages in the same stage are sent simultaneously; only after all reply messages of the previous stage have arrived does the message sending of the next stage begin;
Step 5, load balancing during computing-task execution: the task allocation module optimizes the message-sending order according to the dependencies between tasks and selects the optimal task execution node on which to start the computing object; the management node can also balance the load according to the load of each working node, so as to improve performance; each working node maintains the state of the computing objects allocated to it, and the state of a computing object is in turn reflected in the corresponding message object on the management node;
Step 6, executing the big data computing tasks: after the job engine has obtained the parameters needed by each task, it sends an execution request to the execution node; on receiving the request, the execution node starts executing the methods contained in the computing object and returns the computing-object state R to the job engine, which records the state in the system shared buffer; once the computing object has finished executing, it returns the finished state (Do or De) to the master node through the cluster node, and the computing result is also returned to the master node; after the engine on the master node obtains the computing result of the computing object, it changes the state of the abstract object and sends the next request;
Step 7, end of the big data computing job: when the job has finished executing, or its execution condition is no longer satisfied, or the job is cancelled by the user, the job exits the system and its life ends.
CN201611252002.5A 2016-12-30 2016-12-30 Extensible big data computing method based on message combination Expired - Fee Related CN106681820B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611252002.5A CN106681820B (en) 2016-12-30 2016-12-30 Extensible big data computing method based on message combination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611252002.5A CN106681820B (en) 2016-12-30 2016-12-30 Extensible big data computing method based on message combination

Publications (2)

Publication Number Publication Date
CN106681820A true CN106681820A (en) 2017-05-17
CN106681820B CN106681820B (en) 2020-05-01

Family

ID=58872712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611252002.5A Expired - Fee Related CN106681820B (en) 2016-12-30 2016-12-30 Extensible big data computing method based on message combination

Country Status (1)

Country Link
CN (1) CN106681820B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059829A (en) * 2019-04-30 2019-07-26 济南浪潮高新科技投资发展有限公司 A kind of asynchronous parameters server efficient parallel framework and method
CN110222005A (en) * 2019-07-15 2019-09-10 北京一流科技有限公司 Data processing system and its method for isomery framework
CN110245108A (en) * 2019-07-15 2019-09-17 北京一流科技有限公司 It executes body creation system and executes body creation method
CN110262995A (en) * 2019-07-15 2019-09-20 北京一流科技有限公司 It executes body creation system and executes body creation method
CN110347636A (en) * 2019-07-15 2019-10-18 北京一流科技有限公司 Data execute body and its data processing method
CN113537937A (en) * 2021-07-16 2021-10-22 重庆富民银行股份有限公司 Task arrangement method, device and equipment based on topological sorting and storage medium
CN115601195A (en) * 2022-10-17 2023-01-13 桂林电子科技大学(Cn) Transaction bidirectional recommendation system and method based on real-time label of power user
CN116150263A (en) * 2022-10-11 2023-05-23 中国兵器工业计算机应用技术研究所 Distributed graph calculation engine

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140222745A1 (en) * 2013-02-05 2014-08-07 International Business Machines Corporation Dynamic Model-Based Analysis of Data Centers
CN104904160A (en) * 2012-11-09 2015-09-09 思杰***有限公司 Systems and methods for appflow for datastream
CN104978232A (en) * 2014-04-09 2015-10-14 阿里巴巴集团控股有限公司 Computation resource capacity expansion method for real-time stream-oriented computation, computation resource release method for real-time stream-oriented computation, computation resource capacity expansion device for real-time stream-oriented computation and computation resource release device for real-time stream-oriented computation
CN105100267A (en) * 2015-08-24 2015-11-25 用友网络科技股份有限公司 Deployment apparatus and deployment method for large enterprise private cloud
CN105426255A (en) * 2015-12-28 2016-03-23 重庆邮电大学 Network I/O (input/output) cost evaluation based ReduceTask data locality scheduling method for Hadoop big data platform
CN105930360A (en) * 2016-04-11 2016-09-07 云南省国家税务局 Storm based stream computing frame text index method and system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104904160A (en) * 2012-11-09 2015-09-09 思杰***有限公司 Systems and methods for appflow for datastream
US20140222745A1 (en) * 2013-02-05 2014-08-07 International Business Machines Corporation Dynamic Model-Based Analysis of Data Centers
CN104978232A (en) * 2014-04-09 2015-10-14 阿里巴巴集团控股有限公司 Computation resource capacity expansion method for real-time stream-oriented computation, computation resource release method for real-time stream-oriented computation, computation resource capacity expansion device for real-time stream-oriented computation and computation resource release device for real-time stream-oriented computation
CN105100267A (en) * 2015-08-24 2015-11-25 用友网络科技股份有限公司 Deployment apparatus and deployment method for large enterprise private cloud
CN105426255A (en) * 2015-12-28 2016-03-23 重庆邮电大学 Network I/O (input/output) cost evaluation based ReduceTask data locality scheduling method for Hadoop big data platform
CN105930360A (en) * 2016-04-11 2016-09-07 云南省国家税务局 Storm based stream computing frame text index method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
汤小春,李洪华: "分布式***中计算作业流的均衡调度算法", 《计算机工程》 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059829A (en) * 2019-04-30 2019-07-26 济南浪潮高新科技投资发展有限公司 A kind of asynchronous parameters server efficient parallel framework and method
CN110222005A (en) * 2019-07-15 2019-09-10 北京一流科技有限公司 Data processing system and its method for isomery framework
CN110245108A (en) * 2019-07-15 2019-09-17 北京一流科技有限公司 It executes body creation system and executes body creation method
CN110262995A (en) * 2019-07-15 2019-09-20 北京一流科技有限公司 It executes body creation system and executes body creation method
CN110347636A (en) * 2019-07-15 2019-10-18 北京一流科技有限公司 Data execute body and its data processing method
CN110347636B (en) * 2019-07-15 2024-04-30 北京一流科技有限公司 Data execution body and data processing method thereof
CN113537937A (en) * 2021-07-16 2021-10-22 重庆富民银行股份有限公司 Task arrangement method, device and equipment based on topological sorting and storage medium
CN116150263A (en) * 2022-10-11 2023-05-23 中国兵器工业计算机应用技术研究所 Distributed graph calculation engine
CN116150263B (en) * 2022-10-11 2023-07-25 中国兵器工业计算机应用技术研究所 Distributed graph calculation engine
CN115601195A (en) * 2022-10-17 2023-01-13 桂林电子科技大学(Cn) Transaction bidirectional recommendation system and method based on real-time label of power user
CN115601195B (en) * 2022-10-17 2023-09-08 桂林电子科技大学 Transaction bidirectional recommendation system and method based on real-time label of power user

Also Published As

Publication number Publication date
CN106681820B (en) 2020-05-01

Similar Documents

Publication Publication Date Title
CN106681820A (en) Message combination based extensible big data computing method
CN107239335B (en) Job scheduling system and method for distributed system
US20200349135A1 (en) Compiling graph-based program specifications
CN105824957B (en) The query engine system and querying method of distributed memory columnar database
CN113010302B (en) Multi-task scheduling method and system under quantum-classical hybrid architecture and quantum computer system architecture
US10089087B2 (en) Executing graph-based program specifications
US9170846B2 (en) Distributed data-parallel execution engines for user-defined serial problems using branch-and-bound algorithm
US20160062736A1 (en) Specifying components in graph-based programs
CN105956021A (en) Automated task parallel method suitable for distributed machine learning and system thereof
WO2016107488A1 (en) Streaming graph optimization method and apparatus
US20130218299A1 (en) MCP Scheduling For Parallelization Of LAD/FBD Control Program In Multi-Core PLC
CN106687920A (en) Managing invocation of tasks
CN106687919A (en) Managing state for controlling tasks
CN106605209A (en) Controlling data processing tasks
Jacobs Modular verification of liveness properties of the I/O behavior of imperative programs
Sax et al. Performance optimization for distributed intra-node-parallel streaming systems
US11256486B2 (en) Method and computer program product for an UI software application
Rakadjiev et al. Parallel SMT solving and concurrent symbolic execution
Morassutto et al. Noir: design, implementation and evaluation of a streaming and batch processing framework
Falcone et al. Reactive hla-based distributed simulation systems with rxhla
Abid et al. Asynchronous coordination of stateful autonomic managers in the cloud
US20230409344A1 (en) Computer-readable recording medium storing execution control program, execution control method, and information processing device
Guo A new approach for web service composition based on semantic
US20240202043A1 (en) Dynamic subtask creation and execution in processing platforms
Councilman Extensible parallel programming in ableC

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200501

Termination date: 20201230