CN107707592A - Task processing method, node and content distribution network - Google Patents

Task processing method, node and content distribution network

Info

Publication number
CN107707592A
CN107707592A (application CN201710059992.9A)
Authority
CN
China
Prior art keywords
node
task
child node
processing
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710059992.9A
Other languages
Chinese (zh)
Inventor
李逸骏
苗辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guizhou White Cloud Technology Co Ltd
Guizhou Baishancloud Technology Co Ltd
Original Assignee
Guizhou White Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guizhou White Cloud Technology Co Ltd
Priority to CN201710059992.9A
Priority to PCT/CN2018/073799 (published as WO2018137611A1)
Publication of CN107707592A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061: Partitioning or combining of resources

Abstract

The embodiments of the invention provide a task processing method, a node, and a content distribution network. A first-type child node obtains a to-be-processed task that the task scheduling node has assigned to the task processing node where the first-type child node is located; obtains, from a storage node, to-be-processed data corresponding to the to-be-processed task; splits the to-be-processed data into multiple pieces of sub-data; distributes the multiple pieces of sub-data to at least one second-type child node in the task processing node where the first-type child node is located for data processing; and merges the sub-data processed by each second-type child node in that task processing node. Because of the first-type child node, each second-type child node can obtain data directly from the first-type child node, and because the first-type child node has already split the data, each second-type child node only needs to obtain its own portion of the data from the first-type child node, which reduces the network burden.

Description

Task processing method, node and content distribution network
Technical field
The present invention relates to the field of distributed computing, and in particular to a task processing method, a node, and a content distribution network.
Background
With the development of science and technology, distributed computing is widely applied. For example, in the field of video transcoding, the transcoding task for a single video is often distributed to multiple processing nodes.
In existing distributed computing systems, when a task is processed, each processing node must separately obtain the complete to-be-processed data from the storage system and then select the portion of it to process, which undoubtedly adds to the network burden. When a processing node fails to process its data, another processing node has to obtain the complete to-be-processed data from the storage system again, which likewise adds to the network burden.
How to solve this problem remains a pressing technical challenge for those skilled in the art.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a task processing method, a node, and a content distribution network that reduce the network burden. The specific technical solutions are as follows:
A task processing method is applied in a content distribution network, the content distribution network including: a storage node, a task scheduling node, and multiple task processing nodes, each task processing node including: a first-type child node and multiple second-type child nodes, wherein each second-type child node corresponds to a different device. The method includes:
the first-type child node obtains a to-be-processed task that the task scheduling node has assigned to the task processing node where the first-type child node is located;
the first-type child node obtains, from the storage node, to-be-processed data corresponding to the to-be-processed task;
the first-type child node splits the to-be-processed data into multiple pieces of sub-data;
the first-type child node distributes the multiple pieces of sub-data to at least one second-type child node in the task processing node where the first-type child node is located for data processing;
the first-type child node merges the sub-data processed by each second-type child node in the task processing node where the first-type child node is located.
Optionally, the method further includes:
the first-type child node sends the merged data to the storage node.
Optionally, before the first-type child node obtains the to-be-processed task that the task scheduling node has assigned to the task processing node where the first-type child node is located, the method further includes:
the task scheduling node obtains working-state information of each task processing node and distance information from each task processing node to the storage node;
the task scheduling node determines, according to the working-state information and the distance information, a task processing node to execute the to-be-processed task, and assigns the to-be-processed task to the determined task processing node.
Optionally, the distance between the child nodes in each task processing node is less than a first preset distance.
Optionally, after the first-type child node distributes the multiple pieces of sub-data to at least one second-type child node in the task processing node where the first-type child node is located for data processing, the method further includes:
the first-type child node obtains a processing result of the data processing performed by each second-type child node in the task processing node where the first-type child node is located;
the first-type child node distributes the data processed by the second-type child node whose processing result is failure to another second-type child node, other than the failed one, in the task processing node where the first-type child node is located for data processing.
A task processing node is located in a content distribution network, the content distribution network including: a storage node, a task scheduling node, and multiple task processing nodes, each task processing node including: a first-type child node and multiple second-type child nodes, wherein each second-type child node corresponds to a different device. The first-type child node includes: a task obtaining unit, a data acquiring unit, a data splitting unit, a data allocation unit, and a data combining unit, wherein:
the task obtaining unit is configured to obtain a to-be-processed task that the task scheduling node has assigned to the task processing node where the first-type child node is located;
the data acquiring unit is configured to obtain, from the storage node, to-be-processed data corresponding to the to-be-processed task;
the data splitting unit is configured to split the to-be-processed data into multiple pieces of sub-data;
the data allocation unit is configured to distribute the multiple pieces of sub-data to at least one second-type child node in the task processing node where the first-type child node is located for data processing;
the data combining unit is configured to merge the sub-data processed by each second-type child node in the task processing node where the first-type child node is located.
Optionally, the first-type child node further includes: a data sending unit configured to send the merged data to the storage node.
Optionally, the distance between the child nodes in each task processing node is less than a first preset distance.
Optionally, the first-type child node further includes: a processing result obtaining unit and a reallocation unit, wherein:
the processing result obtaining unit is configured to, after the data allocation unit distributes the multiple pieces of sub-data to at least one second-type child node in the task processing node where the first-type child node is located for data processing, obtain a processing result of the data processing performed by each second-type child node in the task processing node where the first-type child node is located;
the reallocation unit is configured to distribute the data processed by the second-type child node whose processing result is failure to another second-type child node, other than the failed one, in the task processing node where the first-type child node is located for data processing.
A content distribution network includes: a storage node, a task scheduling node, and multiple task processing nodes as described above, wherein:
the task scheduling node obtains load information of each task processing node and distance information from each task processing node to the storage node, determines, according to the load information and the distance information, a task processing node to execute a to-be-processed task, and assigns the to-be-processed task to the determined task processing node;
the storage node stores to-be-processed data corresponding to the to-be-processed task.
The embodiments of the present invention provide a task processing method, a node, and a content distribution network. A first-type child node obtains a to-be-processed task that the task scheduling node has assigned to the task processing node where the first-type child node is located; obtains, from a storage node, to-be-processed data corresponding to the to-be-processed task; splits the to-be-processed data into multiple pieces of sub-data; distributes the multiple pieces of sub-data to at least one second-type child node in the task processing node where the first-type child node is located for data processing; and merges the sub-data processed by each second-type child node in that task processing node. Because of the first-type child node, each second-type child node can obtain data directly from the first-type child node, and because the first-type child node has already split the data, each second-type child node only needs to obtain its own portion of the data from the first-type child node, which reduces the network burden. At the same time, because the number of data transfers between the nodes and the storage system is greatly reduced, the data transmission process is less likely to be affected by network instability, which improves the stability of data transmission.
Of course, any product or method implementing the present invention does not necessarily need to achieve all of the above advantages at the same time.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a content distribution network provided by an embodiment of the present invention;
Fig. 2 is a flowchart of a task processing method provided by an embodiment of the present invention;
Fig. 3 is a flowchart of another task processing method provided by an embodiment of the present invention;
Fig. 4 is a flowchart of another task processing method provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a first-type child node provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
As shown in Fig. 1, a task processing method provided by an embodiment of the present invention is applied in a content distribution network. The content distribution network may include: a storage node 001, a task scheduling node 003, and multiple task processing nodes 002. Each task processing node 002 includes: a first-type child node 100 and multiple second-type child nodes 200, wherein each second-type child node 200 corresponds to a different device. As shown in Fig. 2, the method may include:
S100: the first-type child node 100 obtains a to-be-processed task that the task scheduling node 003 has assigned to the task processing node 002 where the first-type child node 100 is located.
The storage node 001, the task scheduling node 003, and the multiple task processing nodes 002 may be communicatively connected to one another.
Specifically, the distance between the child nodes in each task processing node may be less than a first preset distance, or the child nodes in each task processing node may be located in the same local area network. In this way, the child nodes in each task processing node can communicate with one another efficiently. A task processing node may be a cache node in the content distribution network.
Specifically, multiple devices in a region may be planned, according to business needs, as the multiple child nodes of one task processing node; for example, the communication devices of a telecom operator in a city may be planned as the multiple child nodes of one task processing node, as sketched below.
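Purely for illustration, such a grouping might be captured in a configuration along the following lines (a minimal Python sketch; the identifiers, addresses, and labels are hypothetical and are not part of this disclosure):

    # Hypothetical grouping of the devices of one region/operator into a single
    # task processing node: one first-type child node plus several second-type
    # child nodes, all close enough to communicate efficiently (same LAN or
    # within the first preset distance).
    TASK_PROCESSING_NODE = {
        "node_id": "tpn-example-city-operatorA",   # hypothetical identifier
        "first_type_child": "10.0.0.1",            # fetches, splits, distributes, merges
        "second_type_children": [                  # each entry is a distinct device
            "10.0.0.11",
            "10.0.0.12",
            "10.0.0.13",
        ],
    }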
Optionally, there may be multiple task processing nodes 002 communicatively connected to the storage node 001, and the first-type child node 100 and the multiple second-type child nodes 200 in each task processing node 002 may all be located within a preset geographic area.
S200: the first-type child node 100 obtains, from the storage node 001, to-be-processed data corresponding to the to-be-processed task.
The to-be-processed task may be, for example, a video transcoding task, and the to-be-processed data may be a video file.
The distance between the storage node 001 and a task processing node 002 may be greater than a second preset distance. It can be understood that, for cost reasons, the storage node 001 and the task processing nodes 002 are often not deployed in the same region and are relatively far apart. In this case, after the first-type child node 100 of the present invention obtains the to-be-processed data from the storage node 001, it can split the data and distribute the pieces directly to the second-type child nodes 200, so that the second-type child nodes 200 do not each have to obtain the to-be-processed data from the storage node 001 separately, which reduces the amount of data transferred and the network bandwidth occupied.
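As a rough worked example of the bandwidth saving (the figures below are illustrative only and do not come from the disclosure), suppose a 1 GB video file is to be transcoded by 10 second-type child nodes:

    file_gb = 1.0     # size of the to-be-processed video file (illustrative)
    workers = 10      # number of second-type child nodes sharing the task

    # Existing approach: every processing node fetches the complete file
    # from the remote storage system.
    wan_traffic_old = file_gb * workers    # about 10 GB over the long-distance link

    # Approach of this embodiment: only the first-type child node fetches the
    # file; the split pieces then travel only inside the local network.
    wan_traffic_new = file_gb * 1          # about 1 GB over the long-distance link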
S300: the first-type child node 100 splits the to-be-processed data into multiple pieces of sub-data.
S400: the first-type child node 100 distributes the multiple pieces of sub-data to at least one second-type child node 200 in the task processing node 002 where the first-type child node 100 is located for data processing.
S500: the first-type child node 100 merges the sub-data processed by each second-type child node 200 in the task processing node 002 where the first-type child node 100 is located.
In other embodiments of the present invention, the method may further include, after step S500:
the first-type child node 100 sends the merged data to the storage node 001. The overall flow of steps S100 to S500 is sketched below.
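Purely as an illustration of steps S100 to S500 and the optional write-back, the following Python sketch shows one possible shape of the first-type child node's workflow; the method names (fetch_assigned_task, download, upload, process), the byte-range splitting, and the round-robin assignment are assumptions made for this example and are not details of the disclosure.

    def run_first_type_child(scheduler, storage, second_type_children, piece_count):
        # S100: obtain the to-be-processed task assigned to this task processing node
        task = scheduler.fetch_assigned_task()

        # S200: obtain the corresponding to-be-processed data from the storage node
        data = storage.download(task.data_key)

        # S300: split the data into multiple pieces of sub-data (plain byte ranges
        # here; a real video transcoder would split on segment boundaries)
        step = (len(data) + piece_count - 1) // piece_count
        pieces = [data[i:i + step] for i in range(0, len(data), step)]

        # S400: distribute the sub-data to second-type child nodes in the same
        # task processing node (simple round-robin for the sketch)
        processed = [
            second_type_children[i % len(second_type_children)].process(piece)
            for i, piece in enumerate(pieces)
        ]

        # S500: merge the processed sub-data in order
        merged = b"".join(processed)

        # Optional step after S500: send the merged data back to the storage node
        storage.upload(task.result_key, merged)
        return merged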
In the task processing method provided by this embodiment of the present invention, because of the first-type child node, each second-type child node can obtain data directly from the first-type child node; and because the first-type child node has already split the data, each second-type child node only needs to obtain its own portion of the data from the first-type child node, which greatly reduces the network burden. At the same time, because the number of data transfers between the nodes and the storage system is greatly reduced, the data transmission process is less likely to be affected by network instability, which improves the stability of data transmission.
As shown in Fig. 3, another task processing method provided by an embodiment of the present invention may further include, before step S100:
S001: the task scheduling node 003 obtains working-state information of each task processing node 002 and distance information from each task processing node 002 to the storage node 001.
S002: the task scheduling node 003 determines, according to the working-state information and the distance information, a task processing node to execute the to-be-processed task, and assigns the to-be-processed task to the determined task processing node.
Specifically, when the first-type child node 100 cannot obtain the working-state information of a second-type child node, the first-type child node 100 determines that this second-type child node cannot execute tasks. There are various situations in which the first-type child node 100 cannot obtain the working-state information of a second-type child node, for example: the network connection between the first-type child node 100 and the second-type child node has failed, or the second-type child node has crashed.
The working state of a second-type child node 200 may include, for example, the load information of the second-type child node 200. Optionally, when the load of a second-type child node 200 is lower than a preset load, the second-type child node 200 may be determined to be a second-type child node capable of executing the task, and the to-be-processed task may be assigned to that child node.
Optionally, after assigning the to-be-processed task to the determined task processing node, the task scheduling node 003 may obtain the processing result of the task processing node handling the to-be-processed task; when the processing result is failure, the method returns to step S001. This feedback mechanism ensures that the task is eventually executed successfully and improves the reliability of the system. A sketch of this selection and retry logic is given below.
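A minimal sketch of steps S001 and S002 together with the retry-on-failure feedback described above; the scoring rule (prefer lightly loaded task processing nodes that are close to the storage node), the attribute names, and the retry bound are all assumptions made for this example.

    def schedule_task(task, task_processing_nodes, max_attempts=3):
        for _ in range(max_attempts):
            # S001: collect working-state information (load, reachability) and
            # the distance from each task processing node to the storage node
            candidates = [n for n in task_processing_nodes if n.working_state() is not None]
            if not candidates:
                break

            # S002: choose a task processing node; weighting load against
            # distance equally is an arbitrary choice for this sketch
            chosen = min(
                candidates,
                key=lambda n: n.working_state().load + n.distance_to_storage(),
            )
            result = chosen.execute(task)   # dispatch the to-be-processed task

            # Feedback mechanism: on failure, go back to S001 and re-select
            if result.success:
                return result
        raise RuntimeError("the to-be-processed task could not be completed")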
As shown in Fig. 4, another task processing method provided by an embodiment of the present invention may further include, after step S400:
S410: the first-type child node obtains a processing result of the data processing performed by each second-type child node in the task processing node where the first-type child node is located.
S420: the first-type child node distributes the data processed by the second-type child node whose processing result is failure to another second-type child node, other than the failed one, in the task processing node where the first-type child node is located for data processing.
After step S420 reassigns the data to another second-type child node, step S410 may receive the processing result of that other second-type child node and continue to judge whether the processing result is success. This feedback mechanism ensures that the data is processed successfully, improves the reliability of the system, and increases the success rate of task execution. One possible shape of this reallocation loop is sketched below.
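For steps S410 and S420, the following sketch shows one way the first-type child node could re-dispatch a failed piece of sub-data to a different second-type child node in the same task processing node, without fetching anything from the storage node again; the data structures, rotation strategy, and retry bound are assumptions made for illustration.

    def process_with_reallocation(pieces, second_type_children, max_rounds=3):
        results = {}                                   # successfully processed sub-data by index
        pending = dict(enumerate(pieces))              # sub-data still awaiting a successful result
        workers = list(second_type_children)

        for _ in range(max_rounds):
            if not pending:
                break
            still_pending = {}
            for index, piece in pending.items():
                worker = workers[index % len(workers)]
                outcome = worker.process(piece)        # S410: obtain the processing result
                if outcome.success:
                    results[index] = outcome.data
                else:
                    still_pending[index] = piece       # S420: keep the piece for another node
            # rotate so a failed piece is retried on a different second-type child node
            workers = workers[1:] + workers[:1]
            pending = still_pending

        if pending:
            raise RuntimeError("some sub-data could not be processed successfully")
        return [results[i] for i in sorted(results)]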
In the task processing method provided by this embodiment of the present invention, when a second-type child node fails to process its data, another second-type child node does not need to obtain the data from the storage node again; the first-type child node directly sends the previously failed sub-data to the other second-type child node. This greatly reduces the amount of data transferred, reduces the network burden, increases the number of tasks that can be executed concurrently, and improves the data throughput of the system.
Corresponding to the above method embodiments, an embodiment of the present invention further provides a task processing node.
A task processing node provided by an embodiment of the present invention is located in a content distribution network. As shown in Fig. 1, the content distribution network may include: a storage node 001, a task scheduling node 003, and multiple task processing nodes 002. Each task processing node 002 includes: a first-type child node 100 and multiple second-type child nodes 200, wherein each second-type child node 200 corresponds to a different device. As shown in Fig. 5, the first-type child node 100 may include: a task obtaining unit 110, a data acquiring unit 120, a data splitting unit 130, a data allocation unit 140, and a data combining unit 150, wherein:
the task obtaining unit 110 is configured to obtain a to-be-processed task that the task scheduling node 003 has assigned to the task processing node 002 where the first-type child node 100 is located.
The storage node 001, the task scheduling node 003, and the multiple task processing nodes 002 may be communicatively connected to one another.
Specifically, the distance between the child nodes in each task processing node may be less than a first preset distance, for example, the child nodes may be located at the same edge node of the content distribution network (CDN), in the same equipment room, or in the same region; or, the child nodes in each task processing node may be located in the same local area network. In this way, the child nodes in each task processing node can communicate with one another efficiently. A task processing node may be a cache node in the content distribution network.
Specifically, multiple devices in a region may be planned, according to business needs, as the multiple child nodes of one task processing node; for example, the communication devices of a telecom operator in a city may be planned as the multiple child nodes of one task processing node.
Optionally, there may be multiple task processing nodes 002 communicatively connected to the storage node 001, and the first-type child node 100 and the multiple second-type child nodes 200 in each task processing node 002 may all be located within a preset geographic area.
The data acquiring unit 120 is configured to obtain, from the storage node 001, to-be-processed data corresponding to the to-be-processed task.
The to-be-processed task may be, for example, a video transcoding task, and the to-be-processed data may be a video file.
The distance between the storage node 001 and a task processing node 002 may be greater than a second preset distance. It can be understood that, for cost reasons, the storage node 001 and the task processing nodes 002 are often not deployed in the same region and are relatively far apart. In this case, after the first-type child node 100 of the present invention obtains the to-be-processed data from the storage node 001, it can split the data and distribute the pieces directly to the second-type child nodes 200, so that the second-type child nodes 200 do not each have to obtain the to-be-processed data from the storage node 001 separately, which reduces the amount of data transferred and the network bandwidth occupied.
The data splitting unit 130 is configured to split the to-be-processed data into multiple pieces of sub-data.
The data allocation unit 140 is configured to distribute the multiple pieces of sub-data to at least one second-type child node 200 in the task processing node 002 where the first-type child node 100 is located for data processing.
The data combining unit 150 is configured to merge the sub-data processed by each second-type child node 200 in the task processing node 002 where the first-type child node 100 is located.
The first-type child node 100 may further include: a processing result obtaining unit and a reallocation unit, wherein:
the processing result obtaining unit is configured to, after the data allocation unit 140 distributes the multiple pieces of sub-data to at least one second-type child node in the task processing node where the first-type child node is located for data processing, obtain a processing result of the data processing performed by each second-type child node in the task processing node where the first-type child node is located;
the reallocation unit is configured to distribute the data processed by the second-type child node whose processing result is failure to another second-type child node, other than the failed one, in the task processing node where the first-type child node is located for data processing.
After the reallocation unit reassigns the data to another second-type child node, the processing result obtaining unit may receive the processing result of that other second-type child node and continue to judge whether the processing result is success.
In other embodiments of the present invention, the first-type child node 100 may also include: a data sending unit configured to send the merged data to the storage node 001.
With the task processing node provided by this embodiment of the present invention, because of the first-type child node, each second-type child node can obtain data directly from the first-type child node; and because the first-type child node has already split the data, each second-type child node only needs to obtain its own portion of the data from the first-type child node, which greatly reduces the network burden.
As shown in Fig. 1, an embodiment of the present invention further provides a content distribution network, which may include: a storage node 001, a task scheduling node 003, and multiple task processing nodes 002 as described in the above embodiments, wherein:
the task scheduling node 003 obtains working-state information of each task processing node 002 and distance information from each task processing node 002 to the storage node 001, determines, according to the working-state information and the distance information, a task processing node 002 to execute a to-be-processed task, and assigns the to-be-processed task to the determined task processing node 002;
the storage node 001 stores to-be-processed data corresponding to the to-be-processed task.
Specifically, when the first-type child node 100 cannot obtain the working-state information of a second-type child node, the first-type child node 100 determines that this second-type child node cannot execute tasks. There are various situations in which the first-type child node 100 cannot obtain the working-state information of a second-type child node, for example: the network connection between the first-type child node 100 and the second-type child node has failed, or the second-type child node has crashed.
The working state of a second-type child node 200 may include, for example, the load information of the second-type child node 200. Optionally, when the load of a second-type child node 200 is lower than a preset load, the second-type child node 200 may be determined to be a second-type child node capable of executing the task, and the to-be-processed task may be assigned to that child node.
Specifically, the task scheduling node 003 may assign to-be-processed tasks to the task processing nodes according to the network conditions of the task processing nodes, their regional distribution, the number of processing nodes, the computing capability of the processing nodes, their load conditions, and so on.
With the content distribution network provided by this embodiment of the present invention, because of the first-type child node, each second-type child node can obtain data directly from the first-type child node; and because the first-type child node has already split the data, each second-type child node only needs to obtain its own portion of the data from the first-type child node, which greatly reduces the network burden.
It should be noted that, in this document, relational terms such as "first" and "second" are only used to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
The embodiments in this specification are described in a progressive manner; identical or similar parts among the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The above descriptions are merely preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A task processing method, characterized in that it is applied in a content distribution network, the content distribution network comprising: a storage node, a task scheduling node, and multiple task processing nodes, each task processing node comprising: a first-type child node and multiple second-type child nodes, wherein each second-type child node corresponds to a different device, and the method comprises:
the first-type child node obtaining a to-be-processed task that the task scheduling node has assigned to the task processing node where the first-type child node is located;
the first-type child node obtaining, from the storage node, to-be-processed data corresponding to the to-be-processed task;
the first-type child node splitting the to-be-processed data into multiple pieces of sub-data;
the first-type child node distributing the multiple pieces of sub-data to at least one second-type child node in the task processing node where the first-type child node is located for data processing;
the first-type child node merging the sub-data processed by each second-type child node in the task processing node where the first-type child node is located.
2. The method according to claim 1, characterized in that the method further comprises:
the first-type child node sending the merged data to the storage node.
3. The method according to claim 1, characterized in that, before the first-type child node obtains the to-be-processed task that the task scheduling node has assigned to the task processing node where the first-type child node is located, the method further comprises:
the task scheduling node obtaining working-state information of each task processing node and distance information from each task processing node to the storage node;
the task scheduling node determining, according to the working-state information and the distance information, a task processing node to execute the to-be-processed task, and assigning the to-be-processed task to the determined task processing node.
4. The method according to claim 1, characterized in that the distance between the child nodes in each task processing node is less than a first preset distance.
5. The method according to any one of claims 1 to 4, characterized in that, after the first-type child node distributes the multiple pieces of sub-data to at least one second-type child node in the task processing node where the first-type child node is located for data processing, the method further comprises:
the first-type child node obtaining a processing result of the data processing performed by each second-type child node in the task processing node where the first-type child node is located;
the first-type child node distributing the data processed by the second-type child node whose processing result is failure to another second-type child node, other than the failed one, in the task processing node where the first-type child node is located for data processing.
6. A task processing node, characterized in that it is located in a content distribution network, the content distribution network comprising: a storage node, a task scheduling node, and multiple task processing nodes, each task processing node comprising: a first-type child node and multiple second-type child nodes, wherein each second-type child node corresponds to a different device, and the first-type child node comprises: a task obtaining unit, a data acquiring unit, a data splitting unit, a data allocation unit, and a data combining unit, wherein:
the task obtaining unit is configured to obtain a to-be-processed task that the task scheduling node has assigned to the task processing node where the first-type child node is located;
the data acquiring unit is configured to obtain, from the storage node, to-be-processed data corresponding to the to-be-processed task;
the data splitting unit is configured to split the to-be-processed data into multiple pieces of sub-data;
the data allocation unit is configured to distribute the multiple pieces of sub-data to at least one second-type child node in the task processing node where the first-type child node is located for data processing;
the data combining unit is configured to merge the sub-data processed by each second-type child node in the task processing node where the first-type child node is located.
7. The task processing node according to claim 6, characterized in that the first-type child node further comprises: a data sending unit configured to send the merged data to the storage node.
8. The task processing node according to claim 6, characterized in that the distance between the child nodes in each task processing node is less than a first preset distance.
9. The task processing node according to any one of claims 6 to 8, characterized in that the first-type child node further comprises: a processing result obtaining unit and a reallocation unit, wherein:
the processing result obtaining unit is configured to, after the data allocation unit distributes the multiple pieces of sub-data to at least one second-type child node in the task processing node where the first-type child node is located for data processing, obtain a processing result of the data processing performed by each second-type child node in the task processing node where the first-type child node is located;
the reallocation unit is configured to distribute the data processed by the second-type child node whose processing result is failure to another second-type child node, other than the failed one, in the task processing node where the first-type child node is located for data processing.
10. A content distribution network, characterized in that the content distribution network comprises: a storage node, a task scheduling node, and multiple task processing nodes according to any one of claims 6 to 9, wherein:
the task scheduling node obtains load information of each task processing node and distance information from each task processing node to the storage node, determines, according to the load information and the distance information, a task processing node to execute a to-be-processed task, and assigns the to-be-processed task to the determined task processing node;
the storage node stores to-be-processed data corresponding to the to-be-processed task.
CN201710059992.9A 2017-01-24 2017-01-24 Task processing method, node and content distributing network Pending CN107707592A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710059992.9A CN107707592A (en) 2017-01-24 2017-01-24 Task processing method, node and content distributing network
PCT/CN2018/073799 WO2018137611A1 (en) 2017-01-24 2018-01-23 Task processing method, node, medium, device and content distribution network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710059992.9A CN107707592A (en) 2017-01-24 2017-01-24 Task processing method, node and content distributing network

Publications (1)

Publication Number Publication Date
CN107707592A true CN107707592A (en) 2018-02-16

Family

ID=61169448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710059992.9A Pending CN107707592A (en) 2017-01-24 2017-01-24 Task processing method, node and content distributing network

Country Status (2)

Country Link
CN (1) CN107707592A (en)
WO (1) WO2018137611A1 (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102638456B (en) * 2012-03-19 2015-09-23 杭州海康威视数字技术股份有限公司 Based on magnanimity real-time video code stream intelligent analysis method and the system thereof of cloud computing
CN104244025A (en) * 2013-06-09 2014-12-24 北京同步科技有限公司 Cluster transcoding system and method thereof
CN103685558B (en) * 2013-12-30 2016-09-14 常州大学 A kind of wireless sensor network and task processing method thereof
CN104239144A (en) * 2014-09-22 2014-12-24 珠海许继芝电网自动化有限公司 Multilevel distributed task processing system
CN104378665B (en) * 2014-11-24 2018-02-27 深圳市天威视讯股份有限公司 A kind of distributed trans-coding system and method based on DTV

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104102475A (en) * 2013-04-11 2014-10-15 腾讯科技(深圳)有限公司 Method, device and system for processing distributed type parallel tasks
CN103279351A (en) * 2013-05-31 2013-09-04 北京高森明晨信息科技有限公司 Method and device for task scheduling
CN105893497A (en) * 2016-03-29 2016-08-24 杭州数梦工场科技有限公司 Task processing method and device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214132A (en) * 2018-10-30 2019-01-15 中国运载火箭技术研究院 A kind of big Throughput Asynchronous task processing system of non-coupled streaming towards LVC emulation
CN109214132B (en) * 2018-10-30 2023-06-30 中国运载火箭技术研究院 LVC simulation-oriented uncoupled streaming type large-flux asynchronous task processing system
WO2021179522A1 (en) * 2020-03-13 2021-09-16 平安国际智慧城市科技股份有限公司 Computing resource allocation system, method, and apparatus, and computer device
CN114845141A (en) * 2022-04-18 2022-08-02 上海哔哩哔哩科技有限公司 Edge transcoding method and device
CN116743752A (en) * 2023-08-11 2023-09-12 山东恒宇电子有限公司 System for realizing data processing load balance by distributed network communication

Also Published As

Publication number Publication date
WO2018137611A1 (en) 2018-08-02


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2018-02-16)