CN109688222A - Scheduling method for shared computing resources, shared computing system, server and storage medium - Google Patents

Scheduling method for shared computing resources, shared computing system, server and storage medium

Info

Publication number
CN109688222A
CN109688222A (application CN201811601521.7A)
Authority
CN
China
Prior art keywords
shared
computing node
computing resource
node
computing task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811601521.7A
Other languages
Chinese (zh)
Other versions
CN109688222B (en)
Inventor
李浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Onething Technology Co Ltd
Original Assignee
Shenzhen Onething Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Onething Technology Co Ltd filed Critical Shenzhen Onething Technology Co Ltd
Priority to CN201811601521.7A priority Critical patent/CN109688222B/en
Publication of CN109688222A publication Critical patent/CN109688222A/en
Priority to PCT/CN2019/092458 priority patent/WO2020133967A1/en
Application granted granted Critical
Publication of CN109688222B publication Critical patent/CN109688222B/en
Active legal status (current)
Anticipated expiration legal status


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0896 Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45595 Network integration; Enabling network access in virtual machine instances

Abstract

The invention discloses a scheduling method for shared computing resources. The method comprises: obtaining a pending shared computing task; obtaining a list of all candidate shared computing nodes; selecting, from the shared computing node list, a shared computing node that matches the shared computing task; and issuing the shared computing task to the shared computing node matched with the shared computing task. The invention also provides a shared computing system, a server and a storage medium. The invention can select suitable shared computing nodes according to the resource requirements of a user, and respond to node fluctuations in real time with corresponding scheduling.

Description

Scheduling method for shared computing resources, shared computing system, server and storage medium
Technical field
The present invention relates to the field of shared computing, and in particular to a scheduling method for shared computing resources, a shared computing system, a server and a storage medium.
Background technique
Many enterprises currently need large amounts of bandwidth, disk and CPU resources to provide stable, high-speed services to users distributed across different regions and network environments. At the same time, home resources such as bandwidth and storage are largely idle. By treating the intelligent hardware deployed in users' homes as home nodes, a shared computing system can be built that makes full use of these resources and greatly reduces enterprises' cost of service. Home nodes have the following characteristics: 1. they are numerous, possibly reaching hundreds of thousands, millions or more; 2. their stability is lower than that of server nodes; 3. nodes interconnect over the public network, and node IP addresses change dynamically; 4. the physical resources of an individual node are small and fluctuate in real time.
In this model, the core challenge is flexible and efficient management of the resources aggregated from the intelligent hardware: different business programs must be deployed quickly, resource management and security control must be applied to them, and scheduling must be performed in real time according to each node's resource usage so that the nodes' physical resources are used to the fullest. For abstracting virtual computing, storage and network resources from home-network nodes deployed at a scale of a million or more, the industry currently has no mature solution.
Summary of the invention
In view of this, the present invention proposes a scheduling method for shared computing resources, a shared computing system, a server and a storage medium, to solve at least one of the above technical problems.
First, to achieve the above object, the present invention proposes a scheduling method for shared computing resources, the method comprising:
obtaining a pending shared computing task;
obtaining a list of all candidate shared computing nodes;
selecting, from the shared computing node list, a shared computing node that matches the shared computing task;
issuing the shared computing task to the shared computing node matched with the shared computing task.
Optionally, the shared computing node list includes the ID and available-resource data of each shared computing node;
the shared computing task includes the demand for the shared computing resources to be configured;
and selecting, from the shared computing node list, a shared computing node that matches the shared computing task comprises:
selecting, from the shared computing node list, a shared computing node that matches the shared computing task according to the demand for the shared computing resources to be configured and the available-resource data of each shared computing node.
Optionally, the demand for the shared computing resources includes at least one of a bandwidth requirement, a storage-space requirement and a computing-resource requirement.
Optionally, the available-resource data in the shared computing node list are calculated from the real-time node status, task status and task-execution data uploaded by each shared computing node.
Optionally, selecting, from the shared computing node list, a shared computing node that matches the shared computing task according to the demand for the shared computing resources to be configured and the available-resource data of each shared computing node comprises:
obtaining the available-resource data of each shared computing node in the shared computing node list;
selecting, from the shared computing node list, the shared computing nodes whose available-resource data reach a preset value, and generating a list of available nodes;
scoring each shared computing node in the available-node list according to preset metrics, and splitting the demand for the shared computing resources to be configured, using a bin-packing algorithm, across the shared computing nodes whose scores exceed a preset threshold, to obtain the final list of matched nodes.
Optionally, selecting, from the shared computing node list, a shared computing node that matches the shared computing task according to the demand for the shared computing resources to be configured and the available-resource data of each shared computing node further comprises:
periodically obtaining the current available-resource data of the selected shared computing nodes;
judging, according to the demand for the shared computing resources to be configured and the current available-resource data of the shared computing nodes, whether nodes need to be added or removed.
Optionally, the preset metrics include regional resource surplus and historical stability.
Optionally, obtaining the pending shared computing task includes obtaining a Docker image generated from the pending shared computing task.
Optionally, issuing the shared computing task to the shared computing node matched with the shared computing task includes issuing the Docker image corresponding to the shared computing task to the shared computing node matched with the shared computing task.
In addition, to achieve the above object, the present invention also provides a server. The server includes a memory and a processor; the memory stores a scheduler program for shared computing resources that can run on the processor, and when the scheduler program for shared computing resources is executed by the processor it implements the scheduling method for shared computing resources described above.
Further, to achieve the above object, the present invention also provides a shared computing system, the system comprising:
a role management unit, configured to receive a pending shared computing task from a client and distribute the shared computing task to a dispatch service unit;
the dispatch service unit, configured to obtain the shared computing task from the role management unit, obtain a list of all candidate shared computing nodes according to the status and historical data of each shared computing node provided by a node management unit and a data warehouse, and select, from the shared computing node list, a shared computing node that matches the shared computing task;
a deployment service unit, configured to issue the shared computing task to the shared computing node matched with the shared computing task selected by the dispatch service unit.
Further, to achieve the above object, the present invention also provides a storage medium. The storage medium stores a scheduler program for shared computing resources, and the scheduler program can be executed by at least one processor, causing the at least one processor to perform the scheduling method for shared computing resources described above.
With the scheduling method for shared computing resources, the shared computing system, the server and the storage medium proposed by the invention, a Docker cluster formed from millions of shared computing nodes can be managed in a unified way; shared computing nodes are matched to a shared computing task according to the resources it requires, and nodes are rescheduled at any time as node states change, keeping the total resource pool stable.
Detailed description of the invention
Fig. 1 is an architecture diagram of a shared computing system proposed in a first embodiment of the invention;
Fig. 2 is an architecture diagram of a dispatch server proposed in a second embodiment of the invention;
Fig. 3 is a flow diagram of a scheduling method for shared computing resources proposed in a third embodiment of the invention;
Fig. 4 is a detailed flow diagram of S24 in Fig. 3.
The realization of the objects, functional characteristics and advantages of the present invention are further described below in connection with the embodiments, with reference to the accompanying drawings.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that descriptions involving "first", "second" and the like in the present invention are for descriptive purposes only and shall not be understood as indicating or implying relative importance, or implicitly indicating the number of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, provided the combination can be realized by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be realized, the combination shall be deemed not to exist and not to fall within the protection scope claimed by the present invention.
First embodiment
As shown in Fig. 1, a first embodiment of the invention proposes a shared computing system. The shared computing system is an IaaS (Infrastructure as a Service) system built on distributed node resources. Its core function is to select suitable nodes according to the user's resource requirements, apply lightweight virtualization to them, carry the user's program logic, and respond in real time to fluctuations in the nodes' network location, bandwidth, storage and so on by making corresponding scheduling adjustments.
In this embodiment, the shared computing system 1 includes a server 10 and shared computing nodes 19. The server 10 includes a role management unit 11, a dispatch service unit 12, a node management unit 13, a data warehouse 14, a deployment service unit 15 and an image repository 17. The shared computing system 1 communicates with a client 2 over a network and, for a shared computing task initiated by the client 2, allocates the task to the corresponding shared computing nodes 19 so that they execute it.
The client 2 is used to select the specification and capacity of the required resources and the program logic to be executed, automatically generate a Docker (application container engine) image from that program logic, and encapsulate the selected resources into a standardized shared computing task. In this embodiment, a user of the client 2 can select the specification and capacity of the required resources (such as the amount of bandwidth and storage) through a management console, a CLI (Command-line Interface) tool, API (Application Programming Interface) calls or other means, and select the program logic to be executed (which may be written in multiple languages); the program logic is processed by a debugging platform and a cross-compilation platform to automatically generate a Docker image. For example, the resource requirement may be 100 Gbps of bandwidth and 10 PB of storage, and the logic code to be executed may be hello.py. The user of the client 2 can also start, stop, add and delete the program logic. After the client 2 encapsulates the selected resources into a standardized task, it forwards the task to the role management unit 11. The program logic selected at the client 2 is encapsulated as a standardized Docker image, shielding differences in programming language and execution environment, and is then forwarded to the image repository 17.
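As an illustration only, since the patent does not specify a concrete data format, a standardized task produced by the client might be represented as in the following Python sketch; the TaskSpec and ResourceDemand classes, the build_task helper and all field names are assumptions introduced here, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class ResourceDemand:
    bandwidth_gbps: float      # e.g. 100 Gbps of aggregate bandwidth
    storage_pb: float          # e.g. 10 PB of aggregate storage
    cpu_cores: int = 0         # optional computing-resource requirement

@dataclass
class TaskSpec:
    task_id: str
    demand: ResourceDemand     # the shared computing resources to configure
    docker_image: str          # image generated from the user's program logic
    priority: int = 0          # used by the role management unit to order pipelines

def build_task(task_id: str, bandwidth_gbps: float, storage_pb: float,
               docker_image: str, priority: int = 0) -> TaskSpec:
    """Encapsulate the selected resources and program logic into a standardized task."""
    return TaskSpec(task_id, ResourceDemand(bandwidth_gbps, storage_pb),
                    docker_image, priority)

# Example matching the text: 100 Gbps bandwidth, 10 PB storage, logic hello.py
task = build_task("task-001", 100, 10, "registry.example.com/user/hello:latest", priority=1)
```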
The role management unit 11 receives the task from the client 2 and distributes it to the dispatch service unit 12. In this embodiment, the role management unit 11 can place received tasks into multiple parallel pipelines according to priority and relevance, and the dispatch service unit 12 fetches tasks from those pipelines in order.
The dispatch service unit 12 obtains a task from the role management unit 11 and selects, according to the status and historical data of each shared computing node 19 provided by the node management unit 13 and the data warehouse 14, the shared computing nodes 19 that match the shared computing task. To choose nodes, the dispatch service unit 12 relies on the real-time status of the full set of nodes obtained from the node management unit 13, and on the historical data of the nodes and tasks obtained from the data warehouse 14 (such as each node's historical stability). For example, the dispatch service unit 12 first obtains the current list of all candidate shared computing nodes; this list includes the ID and available-resource data of each shared computing node 19, and the available-resource data can be calculated from the real-time node status, task status and task-execution data uploaded by each shared computing node 19. The dispatch service unit 12 then splits the task's resource requirements and selects, according to region, ISP (Internet Service Provider), NAT (Network Address Translation) type, bandwidth, storage space, computing resources and so on, the nodes whose available resources reach a preset value, forming a list of available nodes. Finally, it scores each shared computing node 19 in the available-node list according to preset metrics such as regional resource surplus and historical stability, and, using a bin-packing algorithm driven by resource cost and the principle of maximum resource utilization, splits the demand for the shared computing resources to be configured across the shared computing nodes 19 whose scores exceed a preset threshold, selecting the final list of matched nodes. In addition, after the chosen shared computing nodes 19 upload their real-time node status and task status (from which the current available-resource data are obtained), the dispatch service unit 12 further determines whether nodes need to be added or removed. A minimal sketch of this selection procedure is given below.
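The paragraph above describes the selection pipeline only at a high level, so the following Python sketch is one possible reading of it: filter nodes whose available resources reach a preset value, score them by regional resource surplus and historical stability, and split the demand across the highest-scoring nodes with a greedy first-fit-decreasing pass standing in for the bin-packing algorithm. The field names, scoring weights and single bandwidth dimension are simplifying assumptions, not the patent's actual implementation.

```python
from typing import Dict, List, Tuple

def select_nodes(nodes: List[Dict], demand_bandwidth: float,
                 min_available: float = 1.0,
                 score_threshold: float = 0.5) -> List[Tuple[str, float]]:
    """Pick matched nodes for a task that needs `demand_bandwidth` units of bandwidth.

    Each node dict is assumed to carry: id, available_bandwidth,
    region_surplus (0..1) and history_stability (0..1).
    """
    # Step 1: keep only nodes whose available resources reach the preset value.
    available = [n for n in nodes if n["available_bandwidth"] >= min_available]

    # Step 2: score each available node by the preset metrics
    # (regional resource surplus and historical stability; weights are illustrative).
    def score(n: Dict) -> float:
        return 0.5 * n["region_surplus"] + 0.5 * n["history_stability"]

    candidates = [n for n in available if score(n) > score_threshold]

    # Step 3: greedy first-fit-decreasing "bin packing": split the demand across
    # the highest-capacity candidates until it is fully covered.
    candidates.sort(key=lambda n: n["available_bandwidth"], reverse=True)
    matched, remaining = [], demand_bandwidth
    for n in candidates:
        if remaining <= 0:
            break
        share = min(n["available_bandwidth"], remaining)
        matched.append((n["id"], share))
        remaining -= share

    if remaining > 0:
        raise RuntimeError("not enough shared computing resources to cover the demand")
    return matched
```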
The node management unit 13 receives the real-time node status and task status uploaded by each shared computing node 19 and provides them to the dispatch service unit 12 for scheduling.
The data warehouse 14 receives the data generated during task execution uploaded by each shared computing node 19 and provides them to the dispatch service unit 12 for scheduling.
The deployment service unit 15 issues the tasks to be deployed to the shared computing nodes 19 chosen by the dispatch service unit 12.
The image repository 17 receives the Docker images generated by the client 2 and provides Docker images to the shared computing nodes 19.
A shared computing node 19 receives and executes the task deployed by the deployment service unit 15: it downloads the corresponding Docker image from the image repository 17, starts an image instance, and uploads its real-time node status, task status and the data generated on the node. In this embodiment the shared computing node 19 downloads the Docker image from the image repository 17; in other embodiments it may obtain a Docker image already downloaded by another shared computing node 19 through P2P transfer between shared computing nodes 19. After downloading the Docker image, it can likewise transfer the image to other shared computing nodes via P2P.
Further, the shared computing system 1 also includes:
a signaling gateway 16, which issues the tasks deployed by the deployment service unit 15 to the corresponding shared computing nodes 19, receives the real-time node status and task status uploaded by the shared computing nodes 19, and forwards them to the node management unit 13;
a data gateway 18, which transfers Docker images to the shared computing nodes 19, receives the data generated during execution of the Docker instances uploaded by the shared computing nodes 19, and uploads them to the data warehouse 14.
The transmission of the above signaling and data is dynamically accelerated using a content delivery network (Content Delivery Network, CDN).
Further, a shared computing node 19 includes a local signaling proxy 190, a local data proxy 192 and a Docker manager 194. The local signaling proxy 190, local data proxy 192 and Docker manager 194 deployed on each shared computing node 19 partition and manage the node's resources through virtualization, while collecting the node and task status, and the data generated on the node, in real time.
The local signaling proxy 190 receives signaling (such as a task to be deployed) from the signaling gateway 16, parses it and passes it to the Docker manager 194, and uploads the node's real-time status and task status to the signaling gateway 16. The Docker manager 194 downloads the Docker image according to the task received by the local signaling proxy 190, then loads and starts an image instance. The local data proxy 192 receives the Docker image downloaded from the image repository 17 via the data gateway 18, or obtains the Docker image from other shared computing nodes 19 via P2P transfer, and uploads the data generated while the Docker instance runs, such as results, logs and core dumps; these data can later serve as the node's historical data and be used as a reference when the dispatch service unit 12 schedules. Once some shared computing nodes 19 have downloaded a Docker image, the image can be diffused via P2P through the local data proxies 192, reducing the download bandwidth pressure on the data gateway 18.
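The patent does not show how the Docker manager 194 actually starts an instance; the sketch below is a minimal stand-in that uses the standard Docker CLI (docker pull and docker run with --memory and --cpus limits). The image name, container name and limit values are placeholders, and error handling is omitted.

```python
import subprocess

def deploy_instance(image: str, container_name: str,
                    memory_limit: str = "256m", cpu_limit: str = "0.5") -> None:
    """Download a Docker image and start an image instance with resource limits."""
    # Fetch the image (in the system described above this could instead arrive
    # via the data gateway or P2P transfer from another shared computing node).
    subprocess.run(["docker", "pull", image], check=True)

    # Start the instance; --memory and --cpus apply the lightweight resource limits.
    subprocess.run([
        "docker", "run", "-d",
        "--name", container_name,
        "--memory", memory_limit,
        "--cpus", cpu_limit,
        image,
    ], check=True)

# Example: start the image generated from the user's hello.py logic.
deploy_instance("registry.example.com/user/hello:latest", "shared-task-001")
```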
The shared computing system 1 provided in this embodiment can apply lightweight virtualization to resource-constrained home intelligent hardware by means of Docker, and manage in a unified way a Docker cluster formed from millions of ordinary network nodes, with cluster management and fault tolerance across provinces and across carriers. Signaling and data transmission are dynamically accelerated with a CDN, and Docker images are diffused and distributed via P2P, which improves distribution efficiency and saves server-side bandwidth. The Docker image instances carried by the shared computing nodes 19 sit in a public-network environment where a node's NAT type, carrier and region change dynamically; the dispatch service unit 12 keeps adding and removing nodes through the bin-packing algorithm, which keeps the total resource pool stable.
Second embodiment
As shown in Fig. 2, a second embodiment of the invention proposes a server 10.
The server 10 includes a memory 21, a processor 23, a network interface 25 and a communication bus 27. The network interface 25 may optionally include standard wired and wireless interfaces (such as a Wi-Fi interface). The communication bus 27 implements the connections and communication between these components.
The memory 21 includes at least one type of readable storage medium, which may be a non-volatile storage medium such as flash memory, a hard disk, a multimedia card or a card-type memory. In some embodiments the memory 21 may be an internal storage unit of the server 10, such as the server's hard disk. In other embodiments the memory 21 may also be an external storage device of the server 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card.
The memory 21 can be used to store the application software installed on the server 10 and various kinds of data, such as the program code of the scheduler program 20 for shared computing resources and the data generated while it runs.
In some embodiments the processor 23 may be a central processing unit, a microprocessor or another data processing chip, used to run the program code stored in the memory 21 or to process data.
Fig. 2 shows only the server 10 with components 21-27 and the scheduler program 20 for shared computing resources; it should be understood that Fig. 2 does not show all components of the server 10, and more or fewer components may be implemented instead.
In the embodiment of the server 10 shown in Fig. 2, the memory 21, as a computer storage medium, stores the program code of the scheduler program 20 for shared computing resources; when the processor 23 executes this program code, the following method is implemented:
(1) obtaining a pending shared computing task;
(2) obtaining a list of all candidate shared computing nodes;
(3) selecting, from the shared computing node list, a shared computing node 19 that matches the shared computing task;
(4) issuing the shared computing task to the shared computing node 19 matched with the shared computing task.
For a detailed description of the above method, refer to the third embodiment below; details are not repeated here.
3rd embodiment
As shown in Fig. 3, a third embodiment of the invention proposes a scheduling method for shared computing resources, applied to the server 10 described above. In this embodiment, the execution order of the steps in the flow chart shown in Fig. 3 can change according to different requirements, and certain steps can be omitted. The method comprises:
S20: obtaining a pending shared computing task.
In this embodiment, the shared computing task includes the demand for the shared computing resources to be configured, which includes at least one of a bandwidth requirement, a storage-space requirement and a computing-resource requirement. After the user selects, at the client 2, the specification and capacity of the required resources and the program logic to be executed, the client 2 automatically generates a Docker image from the program logic and encapsulates the selected resources into a standardized task. The client 2 then submits the task to the role management unit 11 and forwards the Docker image to the image repository 17. The role management unit 11 can place received tasks into multiple parallel pipelines according to priority and relevance, and the dispatch service unit 12 fetches tasks from those pipelines in order; a small sketch of such a pipeline is given below.
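The pipeline data structure is not specified in the patent; one plausible sketch is to group tasks into parallel pipelines by a relevance key and order them by priority within each pipeline, as below. The class and method names are illustrative assumptions.

```python
import heapq
from collections import defaultdict
from itertools import count

class TaskPipelines:
    """Tasks are grouped into parallel pipelines by a relevance key and
    popped in priority order within each pipeline."""

    def __init__(self):
        self._pipelines = defaultdict(list)   # relevance key -> heap of tasks
        self._seq = count()                   # tie-breaker keeps FIFO order

    def push(self, task, priority: int, relevance_key: str) -> None:
        # heapq is a min-heap, so negate the priority to pop high priority first.
        heapq.heappush(self._pipelines[relevance_key],
                       (-priority, next(self._seq), task))

    def pop(self, relevance_key: str):
        _, _, task = heapq.heappop(self._pipelines[relevance_key])
        return task
```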
S22: obtaining a list of all candidate shared computing nodes.
In this embodiment, the shared computing node list includes the ID and available-resource data of each shared computing node 19; the available-resource data can be calculated from the real-time node status, task status and task-execution data uploaded by each shared computing node 19. The node management unit 13 receives the real-time node status and task status uploaded by each shared computing node 19 and provides them to the dispatch service unit 12 for scheduling. The data warehouse 14 receives the generated data uploaded by each shared computing node 19 and provides them to the dispatch service unit 12 for scheduling. To choose nodes, the dispatch service unit 12 relies on the real-time status of the full set of nodes obtained from the node management unit 13, and on the historical data of the nodes and tasks obtained from the data warehouse 14 (such as each node's historical stability).
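The text says only that available-resource data are calculated from the uploaded node status, task status and execution data. One simple interpretation, shown in this sketch, is reported capacity minus what the currently running tasks consume; the field names are assumptions.

```python
def available_bandwidth(node_status: dict, task_statuses: list) -> float:
    """Estimate a node's free bandwidth from its reported capacity and the
    bandwidth consumed by the tasks currently running on it."""
    used = sum(t.get("bandwidth_used", 0.0) for t in task_statuses)
    return max(node_status["bandwidth_capacity"] - used, 0.0)
```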
S24: selecting, from the shared computing node list, a shared computing node 19 that matches the shared computing task.
The dispatch service unit 12 selects, from the shared computing node list, the shared computing nodes 19 that match the shared computing task according to the demand for the shared computing resources to be configured and the available-resource data of each shared computing node 19. For example, the dispatch service unit 12 first obtains the current list of all candidate shared computing nodes, then splits the task's resource requirements and selects, according to region, ISP, NAT type, bandwidth, storage space, computing resources and so on, the nodes that reach a preset value, forming a list of available nodes. Finally, it scores each shared computing node 19 in the available-node list according to preset metrics such as regional resource surplus and historical stability, and, using a bin-packing algorithm driven by resource cost and the principle of maximum resource utilization, splits the demand for the shared computing resources that the task needs across the shared computing nodes 19 whose scores exceed a preset threshold, selecting the final list of matched nodes. In addition, after the chosen shared computing nodes 19 upload their real-time node status and task status (from which the current available-resource data are obtained), the dispatch service unit 12 further determines whether nodes need to be added or removed.
As shown in Fig. 4, the detailed flow of S24 includes:
S240: obtaining the available-resource data of each shared computing node 19 in the shared computing node list.
S242: selecting, from the shared computing node list, the shared computing nodes 19 whose available-resource data reach a preset value, and generating a list of available nodes.
S244: scoring each shared computing node 19 in the available-node list according to preset metrics, and splitting the demand for the shared computing resources that the task needs to configure, using a bin-packing algorithm, across the shared computing nodes 19 whose scores exceed a preset threshold, to obtain the final list of matched nodes.
S246: periodically obtaining the current available-resource data of the selected shared computing nodes 19.
S248: judging, according to the demand for the shared computing resources and the current available-resource data of the shared computing nodes 19, whether nodes need to be added or removed. For example, nodes may need to be added or removed when a node goes down, its NAT type or carrier changes, its disk storage changes, or its task load changes; a minimal sketch of such a check follows.
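This is a minimal sketch of the periodic check in S246 and S248, reusing the single bandwidth dimension assumed earlier; the 20% slack threshold and the field names are illustrative choices, not values from the patent.

```python
def rebalance(demand_bandwidth: float, selected_nodes: list) -> str:
    """Decide whether nodes must be added or removed after a status refresh.

    Each entry in selected_nodes is assumed to be a dict with the node's
    currently allocated share and its freshly reported available bandwidth.
    """
    healthy_capacity = sum(
        min(n["allocated"], n["current_available"])
        for n in selected_nodes if not n.get("down", False)
    )
    if healthy_capacity < demand_bandwidth:
        return "add nodes"          # demand no longer covered: pick extra nodes
    if healthy_capacity > 1.2 * demand_bandwidth:   # 20% slack is an assumption
        return "remove nodes"       # release surplus nodes back to the pool
    return "keep"
```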
S26: issuing the shared computing task to the shared computing nodes 19 matched with the shared computing task.
After the dispatch service unit 12 has selected the shared computing nodes 19, it distributes the task obtained from the role management unit 11 to each selected shared computing node 19, and the deployment service unit 15 then issues to each selected shared computing node 19 the task assigned to it.
A shared computing node 19 receives the issued task and executes it: it downloads the corresponding Docker image from the image repository 17, starts an image instance, and uploads its real-time node status, task status and the data generated on the node.
The scheduling method for shared computing resources provided in this embodiment can apply lightweight virtualization to resource-constrained home intelligent hardware by means of Docker, and manage in a unified way a Docker cluster formed from millions of ordinary network nodes, with cluster management and fault tolerance across provinces and across carriers. The Docker image instances carried by the shared computing nodes 19 sit in a public-network environment where a node's NAT type, carrier and region change dynamically; the dispatch service unit 12 keeps adding and removing nodes through the bin-packing algorithm, which keeps the total resource pool stable.
Fourth embodiment
The present invention also provides another embodiment: a computer-readable storage medium. The computer-readable storage medium stores the scheduler program 20 for shared computing resources, and the scheduler program 20 can be executed by at least one processor, causing the at least one processor to perform the scheduling method for shared computing resources described above.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, can be embodied in the form of a software product. The software product is stored in a storage medium (such as ROM/RAM, a magnetic disk or an optical disc) and includes instructions that cause a client device (which may be a mobile phone, a computer, an electronic device, an air conditioner, a network device or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not limit its patent scope. Any equivalent structural or procedural transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (12)

1. A scheduling method for shared computing resources, characterized in that the method comprises:
obtaining a pending shared computing task;
obtaining a list of all candidate shared computing nodes;
selecting, from the shared computing node list, a shared computing node that matches the shared computing task;
issuing the shared computing task to the shared computing node matched with the shared computing task.
2. The scheduling method for shared computing resources according to claim 1, characterized in that the shared computing node list includes the ID and available-resource data of each shared computing node;
the shared computing task includes the demand for the shared computing resources to be configured;
and selecting, from the shared computing node list, a shared computing node that matches the shared computing task comprises:
selecting, from the shared computing node list, a shared computing node that matches the shared computing task according to the demand for the shared computing resources to be configured and the available-resource data of each shared computing node.
3. The scheduling method for shared computing resources according to claim 2, characterized in that the demand for the shared computing resources includes at least one of a bandwidth requirement, a storage-space requirement and a computing-resource requirement.
4. The scheduling method for shared computing resources according to claim 2, characterized in that the available-resource data in the shared computing node list are calculated from the real-time node status, task status and task-execution data uploaded by each shared computing node.
5. The scheduling method for shared computing resources according to claim 2, characterized in that selecting, from the shared computing node list, a shared computing node that matches the shared computing task according to the demand for the shared computing resources to be configured and the available-resource data of each shared computing node comprises:
obtaining the available-resource data of each shared computing node in the shared computing node list;
selecting, from the shared computing node list, the shared computing nodes whose available-resource data reach a preset value, and generating a list of available nodes;
scoring each shared computing node in the available-node list according to preset metrics, and splitting the demand for the shared computing resources to be configured, using a bin-packing algorithm, across the shared computing nodes whose scores exceed a preset threshold, to obtain the final list of matched nodes.
6. The scheduling method for shared computing resources according to claim 5, characterized in that selecting, from the shared computing node list, a shared computing node that matches the shared computing task according to the demand for the shared computing resources to be configured and the available-resource data of each shared computing node further comprises:
periodically obtaining the current available-resource data of the selected shared computing nodes;
judging, according to the demand for the shared computing resources to be configured and the current available-resource data of the shared computing nodes, whether nodes need to be added or removed.
7. The scheduling method for shared computing resources according to claim 5, characterized in that the preset metrics include regional resource surplus and historical stability.
8. The scheduling method for shared computing resources according to claim 1, characterized in that obtaining the pending shared computing task comprises: obtaining a Docker image generated from the pending shared computing task.
9. The scheduling method for shared computing resources according to claim 8, characterized in that issuing the shared computing task to the shared computing node matched with the shared computing task comprises: issuing the Docker image corresponding to the shared computing task to the shared computing node matched with the shared computing task.
10. A server, characterized in that the server includes a memory and a processor; the memory stores a scheduler program for shared computing resources that can run on the processor, and when the scheduler program for shared computing resources is executed by the processor it implements the method according to any one of claims 1-9.
11. A shared computing system, characterized in that the system comprises:
a role management unit, configured to receive a pending shared computing task from a client and distribute the shared computing task to a dispatch service unit;
the dispatch service unit, configured to obtain the shared computing task from the role management unit, obtain a list of all candidate shared computing nodes according to the status and historical data of each shared computing node provided by a node management unit and a data warehouse, and select, from the shared computing node list, a shared computing node that matches the shared computing task;
a deployment service unit, configured to issue the shared computing task to the shared computing node matched with the shared computing task selected by the dispatch service unit.
12. A storage medium, storing a scheduler program for shared computing resources, wherein the scheduler program for shared computing resources can be executed by at least one processor, causing the at least one processor to perform the scheduling method for shared computing resources according to any one of claims 1-9.
CN201811601521.7A 2018-12-26 2018-12-26 Shared computing resource scheduling method, shared computing system, server and storage medium Active CN109688222B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811601521.7A CN109688222B (en) 2018-12-26 2018-12-26 Shared computing resource scheduling method, shared computing system, server and storage medium
PCT/CN2019/092458 WO2020133967A1 (en) 2018-12-26 2019-06-24 Method for scheduling shared computing resources, shared computing system, server, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811601521.7A CN109688222B (en) 2018-12-26 2018-12-26 Shared computing resource scheduling method, shared computing system, server and storage medium

Publications (2)

Publication Number Publication Date
CN109688222A true CN109688222A (en) 2019-04-26
CN109688222B CN109688222B (en) 2020-12-25

Family

ID=66189634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811601521.7A Active CN109688222B (en) 2018-12-26 2018-12-26 Shared computing resource scheduling method, shared computing system, server and storage medium

Country Status (2)

Country Link
CN (1) CN109688222B (en)
WO (1) WO2020133967A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110381159A (en) * 2019-07-26 2019-10-25 中国联合网络通信集团有限公司 Task processing method and system
CN110649958A (en) * 2019-09-05 2020-01-03 北京百度网讯科技有限公司 Method, apparatus, device and medium for processing data
CN110661646A (en) * 2019-08-06 2020-01-07 上海孚典智能科技有限公司 Computing service management technology for high-availability Internet of things
CN110677464A (en) * 2019-09-09 2020-01-10 深圳市网心科技有限公司 Edge node device, content distribution system, method, computer device, and medium
CN111126895A (en) * 2019-11-18 2020-05-08 青岛海信网络科技股份有限公司 Management warehouse and scheduling method for scheduling intelligent analysis algorithm in complex scene
WO2020133967A1 (en) * 2018-12-26 2020-07-02 深圳市网心科技有限公司 Method for scheduling shared computing resources, shared computing system, server, and storage medium
CN112015521A (en) * 2020-09-30 2020-12-01 北京百度网讯科技有限公司 Configuration method and device of inference service, electronic equipment and storage medium
CN112068954A (en) * 2020-08-18 2020-12-11 弥伦工业产品设计(上海)有限公司 Method and system for scheduling network computing resources
CN112394944A (en) * 2019-08-13 2021-02-23 阿里巴巴集团控股有限公司 Distributed development method, device, storage medium and computer equipment
CN112702306A (en) * 2019-10-23 2021-04-23 ***通信有限公司研究院 Intelligent service sharing method, device, equipment and storage medium
CN112738174A (en) * 2020-12-23 2021-04-30 中国人民解放军63921部队 Cross-region multi-task data transmission method and system for private network

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111949394A (en) * 2020-07-16 2020-11-17 广州玖的数码科技有限公司 Method, system and storage medium for sharing computing power resource
CN112199193A (en) * 2020-09-30 2021-01-08 北京达佳互联信息技术有限公司 Resource scheduling method and device, electronic equipment and storage medium
CN112540836A (en) * 2020-12-11 2021-03-23 光大兴陇信托有限责任公司 Service scheduling management method and system
CN112799742B (en) * 2021-02-09 2024-02-13 上海海事大学 Machine learning practical training system and method based on micro-service
US20220374215A1 (en) * 2021-05-20 2022-11-24 International Business Machines Corporation Generative experiments for application deployment in 5g networks

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102917077A (en) * 2012-11-20 2013-02-06 无锡城市云计算中心有限公司 Resource allocation method in cloud computing system
CN102938790A (en) * 2012-11-20 2013-02-20 无锡城市云计算中心有限公司 Resource allocation method of cloud computing system
CN105791447A (en) * 2016-05-20 2016-07-20 北京邮电大学 Method and device for dispatching cloud resource orienting to video service
CN106371889A (en) * 2016-08-22 2017-02-01 浪潮(北京)电子信息产业有限公司 Method and device for realizing high-performance cluster system for scheduling mirror images
CN106919445A (en) * 2015-12-28 2017-07-04 华为技术有限公司 A kind of method and apparatus of the container of Parallel Scheduling in the cluster
CN107105029A (en) * 2017-04-18 2017-08-29 北京友普信息技术有限公司 A kind of CDN dynamic contents accelerated method and system based on Docker technologies
CN107239329A (en) * 2016-03-29 2017-10-10 西门子公司 Unified resource dispatching method and system under cloud environment
CN107566443A (en) * 2017-07-12 2018-01-09 郑州云海信息技术有限公司 A kind of distributed resource scheduling method
US20180034856A1 (en) * 2016-07-27 2018-02-01 International Business Machines Corporation Compliance configuration management
CN107733977A (en) * 2017-08-31 2018-02-23 北京百度网讯科技有限公司 A kind of cluster management method and device based on Docker
CN107819802A (en) * 2016-09-13 2018-03-20 华为软件技术有限公司 A kind of mirror image acquisition methods, node device and server in node cluster
CN107844376A (en) * 2017-11-21 2018-03-27 北京星河星云信息技术有限公司 Resource allocation method, computing system, medium and the server of computing system
CN108563500A (en) * 2018-05-08 2018-09-21 深圳市零度智控科技有限公司 Method for scheduling task, cloud platform based on cloud platform and computer storage media
CN108628674A (en) * 2018-05-11 2018-10-09 深圳市零度智控科技有限公司 Method for scheduling task, cloud platform based on cloud platform and computer storage media
CN109062658A (en) * 2018-06-29 2018-12-21 优刻得科技股份有限公司 Realize dispatching method, device, medium, equipment and the system of computing resource serviceization

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9268613B2 (en) * 2010-12-20 2016-02-23 Microsoft Technology Licensing, Llc Scheduling and management in a personal datacenter
CN104506600A (en) * 2014-12-16 2015-04-08 苏州海博智能***有限公司 Computation resource sharing method, device and system as well as client side and server
WO2018067047A1 (en) * 2016-10-05 2018-04-12 Telefonaktiebolaget Lm Ericsson (Publ) Method and module for assigning task to server entity
CN109067890B (en) * 2018-08-20 2021-06-29 广东电网有限责任公司 CDN node edge computing system based on docker container
CN109688222B (en) * 2018-12-26 2020-12-25 深圳市网心科技有限公司 Shared computing resource scheduling method, shared computing system, server and storage medium

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102938790A (en) * 2012-11-20 2013-02-20 无锡城市云计算中心有限公司 Resource allocation method of cloud computing system
CN102917077A (en) * 2012-11-20 2013-02-06 无锡城市云计算中心有限公司 Resource allocation method in cloud computing system
CN106919445A (en) * 2015-12-28 2017-07-04 华为技术有限公司 A kind of method and apparatus of the container of Parallel Scheduling in the cluster
CN107239329A (en) * 2016-03-29 2017-10-10 西门子公司 Unified resource dispatching method and system under cloud environment
CN105791447A (en) * 2016-05-20 2016-07-20 北京邮电大学 Method and device for dispatching cloud resource orienting to video service
US20180034856A1 (en) * 2016-07-27 2018-02-01 International Business Machines Corporation Compliance configuration management
CN106371889A (en) * 2016-08-22 2017-02-01 浪潮(北京)电子信息产业有限公司 Method and device for realizing high-performance cluster system for scheduling mirror images
CN107819802A (en) * 2016-09-13 2018-03-20 华为软件技术有限公司 A kind of mirror image acquisition methods, node device and server in node cluster
CN107105029A (en) * 2017-04-18 2017-08-29 北京友普信息技术有限公司 A kind of CDN dynamic contents accelerated method and system based on Docker technologies
CN107566443A (en) * 2017-07-12 2018-01-09 郑州云海信息技术有限公司 A kind of distributed resource scheduling method
CN107733977A (en) * 2017-08-31 2018-02-23 北京百度网讯科技有限公司 A kind of cluster management method and device based on Docker
CN107844376A (en) * 2017-11-21 2018-03-27 北京星河星云信息技术有限公司 Resource allocation method, computing system, medium and the server of computing system
CN108563500A (en) * 2018-05-08 2018-09-21 深圳市零度智控科技有限公司 Method for scheduling task, cloud platform based on cloud platform and computer storage media
CN108628674A (en) * 2018-05-11 2018-10-09 深圳市零度智控科技有限公司 Method for scheduling task, cloud platform based on cloud platform and computer storage media
CN109062658A (en) * 2018-06-29 2018-12-21 优刻得科技股份有限公司 Realize dispatching method, device, medium, equipment and the system of computing resource serviceization

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020133967A1 (en) * 2018-12-26 2020-07-02 深圳市网心科技有限公司 Method for scheduling shared computing resources, shared computing system, server, and storage medium
CN110381159B (en) * 2019-07-26 2022-02-01 中国联合网络通信集团有限公司 Task processing method and system
CN110381159A (en) * 2019-07-26 2019-10-25 中国联合网络通信集团有限公司 Task processing method and system
CN110661646A (en) * 2019-08-06 2020-01-07 上海孚典智能科技有限公司 Computing service management technology for high-availability Internet of things
CN110661646B (en) * 2019-08-06 2020-08-04 上海孚典智能科技有限公司 Computing service management technology for high-availability Internet of things
CN112394944A (en) * 2019-08-13 2021-02-23 阿里巴巴集团控股有限公司 Distributed development method, device, storage medium and computer equipment
CN110649958A (en) * 2019-09-05 2020-01-03 北京百度网讯科技有限公司 Method, apparatus, device and medium for processing data
CN110649958B (en) * 2019-09-05 2022-07-26 北京百度网讯科技有限公司 Method, apparatus, device and medium for processing satellite data
CN110677464A (en) * 2019-09-09 2020-01-10 深圳市网心科技有限公司 Edge node device, content distribution system, method, computer device, and medium
CN112702306B (en) * 2019-10-23 2023-05-09 ***通信有限公司研究院 Method, device, equipment and storage medium for intelligent service sharing
CN112702306A (en) * 2019-10-23 2021-04-23 ***通信有限公司研究院 Intelligent service sharing method, device, equipment and storage medium
CN111126895A (en) * 2019-11-18 2020-05-08 青岛海信网络科技股份有限公司 Management warehouse and scheduling method for scheduling intelligent analysis algorithm in complex scene
CN112068954A (en) * 2020-08-18 2020-12-11 弥伦工业产品设计(上海)有限公司 Method and system for scheduling network computing resources
CN112015521A (en) * 2020-09-30 2020-12-01 北京百度网讯科技有限公司 Configuration method and device of inference service, electronic equipment and storage medium
CN112738174A (en) * 2020-12-23 2021-04-30 中国人民解放军63921部队 Cross-region multi-task data transmission method and system for private network
CN112738174B (en) * 2020-12-23 2022-11-25 中国人民解放军63921部队 Cross-region multi-task data transmission method and system for private network

Also Published As

Publication number Publication date
WO2020133967A1 (en) 2020-07-02
CN109688222B (en) 2020-12-25

Similar Documents

Publication Publication Date Title
CN109688222A (en) The dispatching method of shared computing resource, shared computing system, server and storage medium
Téllez et al. A tabu search method for load balancing in fog computing
CN102945175A (en) Terminal software online upgrading system and method based on cloud computing environment
US20100131324A1 (en) Systems and methods for service level backup using re-cloud network
CN109491758A (en) Docker mirror image distribution method, system, data gateway and computer readable storage medium
Li et al. In a Telco-CDN, pushing content makes sense
CN111385112B (en) Slice resource deployment method, device, slice manager and computer storage medium
Faticanti et al. Throughput-aware partitioning and placement of applications in fog computing
CN105103506A (en) Network function virtualization method and device
CN111857873A (en) Method for realizing cloud native container network
CN113127192B (en) Method, system, device and medium for sharing same GPU by multiple services
CN104823175A (en) Cloud service managment system
CN108270818A (en) A kind of micro services architecture system and its access method
EP2353256A1 (en) Determination and management of virtual networks
US20130297703A1 (en) Peer node and method for improved peer node selection
CN110035306A (en) Dispositions method and device, the dispatching method and device of file
FR3069669A1 (en) A COMMUNICATION SYSTEM AND A METHOD FOR ACCESSING AND DEPLOYING EPHEMICAL MICROSERVICES ON A HETEROGENEOUS PLATFORM
US11178252B1 (en) System and method for intelligent distribution of integration artifacts and runtime requests across geographic regions
CN109962961A (en) A kind of reorientation method and system of content distribution network CDN service node
KR101033813B1 (en) Cloud computing network system and file distrubuting method of the same
Faticanti et al. Deployment of application microservices in multi-domain federated fog environments
CN105792247A (en) Data pushing method and device
Amoretti et al. Service migration within the cloud: Code mobility in SP2A
CN109309646A (en) A kind of multi-media transcoding method and system
CN108667920B (en) Service flow acceleration system and method for fog computing environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant