CN103188159B - Hardware performance management method and cloud computing system - Google Patents
Hardware performance management method and cloud computing system
- Publication number
- CN103188159B CN103188159B CN201110446425.1A CN201110446425A CN103188159B CN 103188159 B CN103188159 B CN 103188159B CN 201110446425 A CN201110446425 A CN 201110446425A CN 103188159 B CN103188159 B CN 103188159B
- Authority
- CN
- China
- Prior art keywords
- node
- bottleneck
- pool
- resource
- switching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Computer And Data Communications (AREA)
- Debugging And Monitoring (AREA)
Abstract
The present invention relates to a hardware performance management method and a cloud computing system. The cloud computing system includes a plurality of node devices, which are allocated among a plurality of node resource pools, and a management node that implements the management method. The hardware performance management method includes the following steps. The loads of the node resource pools are monitored to determine whether a bottleneck has occurred and to identify the bottleneck resource pool corresponding to it. At least one switching node is evaluated and selected from the node devices of the node resource pools other than the bottleneck resource pool. The switching node is then converted, so that it is reassigned from its original node resource pool to the bottleneck resource pool.
Description
Technical field
The present invention relates to performance management technology for cloud computing, and more particularly to a hardware performance management method and a cloud computing system.
Background
Cloud computing technology combines a large number of servers (also called nodes) over the Internet into an integrated computer that provides high-speed computation and massive storage capacity. It emphasizes that, when local resources are limited, the network can be used to obtain remote computing resources, storage resources, or services. Through technologies such as virtualization and automation, cloud computing lets these nodes share resources or divide work among themselves, while users operate the service web pages through a network browser to carry out various computing tasks.
These numerous nodes together form a server cluster (server group). Because the number of nodes is huge, how the cluster can automatically resolve a bottleneck when it arises in some node resource of the cluster and degrades overall system efficiency, and thereby provide higher performance, has become an important problem for many current cloud computing systems.
Summary of the invention
The present invention provides a hardware performance management method and a cloud computing system that monitor whether a bottleneck has occurred in any of several node resource pools and automatically adjust and redistribute the role assignments of the servers, so that the bottleneck can be resolved automatically and in real time, providing higher performance for the cloud computing system.
The present invention proposes a hardware performance management method applicable to a cloud computing system. The cloud computing system includes a plurality of node devices allocated among a plurality of node resource pools. The management method includes the following steps. The loads of the node resource pools are monitored to determine whether a bottleneck has occurred and to identify the bottleneck resource pool corresponding to it, where the bottleneck resource pool is one of the node resource pools. At least one switching node is evaluated and selected from the node devices of the node resource pools other than the bottleneck resource pool. The switching node is converted, so that it is reassigned from its original node resource pool to the bottleneck resource pool.
In an embodiment of the invention, the management method further includes the following steps. A normal threshold and a bottleneck threshold are set for each node resource pool. When the load of a node resource pool is below its normal threshold, that pool is considered to be in the normal state. When the load of a node resource pool rises above its bottleneck threshold, that pool is considered to exhibit the bottleneck and becomes the bottleneck resource pool.
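The two-threshold scheme just described can be sketched as a small classification routine; the function and state names are illustrative choices, not part of the patent:

```python
NORMAL, HIGH_LOAD, BOTTLENECK = "normal", "high-load", "bottleneck"

def classify_pool(load, normal_threshold, bottleneck_threshold):
    """Classify a node resource pool's load against its two thresholds.

    Below the normal threshold the pool is in the normal state; at or
    above the bottleneck threshold it becomes the bottleneck resource
    pool; in between it is merely highly loaded, a state the detailed
    description also distinguishes.
    """
    if load < normal_threshold:
        return NORMAL
    if load < bottleneck_threshold:
        return HIGH_LOAD
    return BOTTLENECK

# A storage pool with an 80% normal / 90% bottleneck threshold pair:
print(classify_pool(0.93, 0.80, 0.90))  # prints: bottleneck
```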
In an embodiment of the invention, evaluating the switching node includes the following step. Based on the bottleneck thresholds, it is estimated whether, after the switching node is reassigned from its original node resource pool to the bottleneck resource pool, the load of the original node resource pool and the load of the bottleneck resource pool will each remain below their respective bottleneck thresholds.
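One way to carry out this estimate, assuming each pool's load is an average utilization spread evenly over its nodes (a simplification chosen here for illustration and not specified by the patent):

```python
def reassignment_is_feasible(src_load, src_nodes, dst_load, dst_nodes,
                             src_bottleneck, dst_bottleneck, moved=1):
    """Estimate whether moving `moved` nodes from the source pool to the
    bottleneck (destination) pool leaves both pools below their
    bottleneck thresholds, assuming each pool's total work is unchanged
    and spread evenly over its nodes."""
    if src_nodes - moved <= 0:
        return False  # cannot empty the source pool
    new_src = src_load * src_nodes / (src_nodes - moved)
    new_dst = dst_load * dst_nodes / (dst_nodes + moved)
    return new_src < src_bottleneck and new_dst < dst_bottleneck

# Moving one node out of a lightly loaded 10-node pool into a 5-node
# pool at 92% usage relieves the bottleneck without creating a new one:
print(reassignment_is_feasible(0.60, 10, 0.92, 5, 0.80, 0.90))  # prints: True
```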
In an embodiment of the invention, converting the switching node includes the following steps. A node-related database is queried to obtain the node-related data of the switching node. The node-related data of the switching node is adjusted so that the switching node is reassigned from its original node resource pool to the bottleneck resource pool. The switching node is isolated from the cloud computing system and adjusted according to the bottleneck resource pool. Then the switching node is rejoined to the cloud computing system.
In an embodiment of the invention, isolating the switching node includes the following steps. The virtual machines on the switching node are migrated from the switching node to the other node devices of its original node resource pool, and the service programs running on the switching node are shut down.
In an embodiment of the invention, isolating the switching node further includes setting the node-related database to quarantine the node-related data of the switching node.
In an embodiment of the invention, the load of a node resource pool includes the computing load of the node devices in that pool, their space load, or a combination thereof.
In an embodiment of the invention, the node resource pools include a service resource pool, a computing resource pool, a storage resource pool, or a combination thereof.
From another viewpoint, the present invention proposes a cloud computing system that includes a plurality of node devices and a management node. The node devices are coupled to one another through a network and are allocated among a plurality of node resource pools. The management node is coupled to the node devices through the network; it monitors the loads of the node resource pools to determine whether a bottleneck has occurred and to identify the corresponding bottleneck resource pool, which is one of the node resource pools. The management node evaluates and selects at least one switching node from the node devices of the node resource pools other than the bottleneck resource pool, and converts the switching node so that it is reassigned from its original node resource pool to the bottleneck resource pool.
The remaining implementation details of the cloud computing system are as described above and are not repeated here.
Based on the above, the cloud computing system of the embodiments of the invention sets a separate load limit for each node resource pool and monitors each pool's operating status. When a bottleneck occurs in a particular node resource pool and no spare nodes are available to assist, the cloud computing system selects some nodes from the pools that are operating normally without a bottleneck and assigns them to that particular pool (in other words, it redistributes the role assignments of some of the nodes), thereby relieving the bottleneck. Therefore, by automatically adjusting and redistributing the roles of the servers, the cloud computing system can resolve bottlenecks automatically and in real time, improve its hardware operating efficiency, and provide higher performance.
To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a schematic diagram of a cloud computing system according to an embodiment of the invention.
Fig. 2 is a flowchart of a hardware performance management method according to an embodiment of the invention.
Fig. 3 is another schematic diagram of a cloud computing system according to an embodiment of the invention.
Description of main element symbols
100: cloud computing system
110: service resource pool
112_1~112_i: service nodes
120: computing resource pool
122_1~122_j: computing nodes
130: storage resource pool
132_1~132_k: storage nodes
140: switch
150: bottleneck monitoring module
160: node selection module
170: data access module
180: node isolation module
190: node deployment module
195: node joining module
300: dotted arrow
DB: node-related database
S210~S290: steps
Detailed description of the invention
Reference will now be made in detail to the exemplary embodiments of the invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the description to refer to the same or like parts.
Fig. 1 is a schematic diagram of a cloud computing system 100 according to an embodiment of the invention. For example, the embodiment provides a container data center offering Infrastructure as a Service (IaaS) as the cloud computing system 100. As shown in Fig. 1, the cloud computing system 100 of this embodiment may comprise at least one container. Each container includes multiple racks, each rack has multiple slots, and each slot can hold one or more servers (also called node devices). Because the containers are identically composed, a single container is taken as an example for convenience of description.
Referring to Fig. 1, the cloud computing system 100 includes a plurality of node devices. When the cloud operating system of the cloud computing system 100 deploys them, the node devices are allocated among several node resource pools. In other words, the node devices are classified into three node types, namely a service resource pool 110, a computing resource pool 120, and a storage resource pool 130, and the service resource pool 110 can be further subdivided according to its service functions. The node resource pools therefore include the service resource pool 110, the computing resource pool 120, the storage resource pool 130, or a combination thereof.
In this embodiment, the service resource pool 110 includes i node devices 112_1~112_i, the computing resource pool 120 includes j node devices 122_1~122_j, and the storage resource pool 130 includes k node devices 132_1~132_k, where i, j, and k are non-negative integers. The node devices 112_1~112_i, 122_1~122_j, and 132_1~132_k are also referred to in this embodiment as service nodes 112_1~112_i, computing nodes 122_1~122_j, and storage nodes 132_1~132_k. All of these node devices are coupled to the switch 140, so that they interconnect, communicate, and exchange data over a local area network. Those applying this embodiment may also couple the node devices through other kinds of networks, such as the Internet or a wireless network, which are not elaborated here.
The service resource pool 110 and the service nodes in it can be subdivided by service function, for example physical installer service, physical manager service, log (LOG) processing service, virtual manager service, application programming interface (API) service, virtual resource provisioning service, database service, storage manager service, load balance service, and security service, among others. The computing resource pool 120 and the computing nodes in it provide computation services, while the storage resource pool 130 and the storage nodes in it provide storage services.
In other words, the service nodes 112_1~112_i mainly provide virtual machine (VM) services to users. These virtual machines run on the computing resource pool 120 formed by the computing nodes 122_1~122_j, and the storage space the virtual machines require is provided by the storage resource pool 130 formed by the storage nodes 132_1~132_k. Each service node 112_1~112_i can provide users with different services according to the different software it runs. By comparison, the computing nodes 122_1~122_j in the computing resource pool 120 and the storage nodes 132_1~132_k in the storage resource pool 130 each run similar software programs, so that they can easily combine to perform large computations or store data.
The cloud computing system also includes a management node for monitoring and adjusting the load of each node device. The management node can be one of the node devices described above, or a separate management device; in this embodiment, the node device 112_2 in the service resource pool 110 serves as the management node. The management node 112_2 includes a bottleneck monitoring module 150, a node selection module 160, a data access module 170, a node isolation module 180, a node deployment module 190, and a node joining module 195; these functional modules are described in detail below. In addition, when the cloud operating system deploys each node device, it obtains the node-related data corresponding to that device; the cloud operating system integrates these node-related data and stores them in the node-related database DB for the management node 112_2 to consult. In this embodiment the node-related database DB resides in the service node 112_1, but the invention is not limited to this; in other embodiments the node-related database DB can reside in any node device.
Although some node devices are dedicated to a specific role in the cloud computing system 100 (for example, dedicated to a specific node resource pool), other node devices can support multiple cloud resources and are not restricted to playing a single role in the cloud computing system 100. For example, the computing capability of some node devices far exceeds that of other node devices, while their ability to provide services or store data is far inferior; such devices are assigned to the computing resource pool 120 to serve as computing nodes. However, many node devices have good computing capability and can also provide good services and data storage, and can therefore be used as service nodes, storage nodes, or computing nodes. That is, such node devices are not limited by their hardware design to a single role in the cloud computing system 100. Although such versatile node devices are assigned to specific node resource pools when the cloud operating system is configured, under certain circumstances the management node 112_2 can also change their roles.
A so-called "bottleneck" occurs when the computing load or storage space of some node resource pool of the cloud computing system 100 is overloaded and no spare node is available to assist; the cloud computing system 100 is then said to be at a bottleneck. For example, when the average utilization of the central processing units (CPUs) of the computing nodes 122_1~122_j in the computing resource pool 120 is too high, this is called a performance bottleneck. Likewise, when the storage space remaining on the storage nodes 132_1~132_k in the storage resource pool 130 is about to run out, this is called a space bottleneck.
Accordingly, when a bottleneck is detected in the cloud computing system 100 and the container has no spare nodes left, the cloud computing system 100 of the embodiment selects some node devices from the node resource pools without a bottleneck and changes their roles, so that the selected node devices become members of the node resource pool in which the bottleneck occurred, thereby eliminating the bottleneck of the whole cloud computing system 100. Of course, the embodiment must also consider whether the converted node devices can sustain the performance of the node resource pool they are switched into.
The hardware performance management method and the cloud computing system 100 to which it applies are described below. Fig. 2 is a flowchart of the hardware performance management method according to an embodiment of the invention. Referring to Figs. 1 and 2, in step S210 the bottleneck monitoring module 150 in the management node 112_2 monitors the loads of the node resource pools 110~130, and in step S220 the bottleneck monitoring module 150 determines whether a bottleneck has occurred and identifies the node resource pool in which it occurred. The node resource pool in which the bottleneck occurs is referred to here as the bottleneck resource pool.
In this embodiment, the management node 112_2 sets a normal threshold and a bottleneck threshold for each of the node resource pools 110~130, and uses them to judge each pool's current operating status. Specifically, the bottleneck monitoring module 150 of this embodiment measures the load of each node device, which includes the device's computing load and space load (how much storage space is used), and consults the node-related database DB to compute the average load of each node resource pool 110~130. Thus the load of a node resource pool includes the average computing load of the node devices in that pool, their space load, or a combination thereof.
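The per-pool averaging described above might look like the following, with a made-up record layout standing in for the node-related database DB:

```python
# Hypothetical per-node load records, keyed by node resource pool.
node_loads = {
    "computing pool 120": [{"cpu": 0.70, "space": 0.20},
                           {"cpu": 0.90, "space": 0.30}],
    "storage pool 130":   [{"cpu": 0.10, "space": 0.95},
                           {"cpu": 0.20, "space": 0.85}],
}

def pool_average(records, metric):
    """Average one load metric (computing or space load) over a pool."""
    return sum(r[metric] for r in records) / len(records)

for pool, records in node_loads.items():
    print(pool, round(pool_average(records, "cpu"), 2),
          round(pool_average(records, "space"), 2))
# prints:
# computing pool 120 0.8 0.25
# storage pool 130 0.15 0.9
```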
In this embodiment, the normal threshold for a performance bottleneck is set to 70%, and the bottleneck threshold for a performance bottleneck is set to 80%. That is, the average CPU utilization of a node resource pool in the normal state must be below 70%, and after a node device changes roles, the average CPU utilization of its original node resource pool must still be below 70%. On the other hand, a performance bottleneck exists when the average CPU utilization of a node resource pool exceeds 80%, and after a node device changes roles, the bottleneck resource pool passes the evaluation only if its average CPU utilization falls below 80%.
In this embodiment, the normal threshold for a space bottleneck is set to 80%, and the bottleneck threshold for a space bottleneck is set to 90%. That is, the ratio of used storage space to total storage space of a node resource pool in the normal state must be below 80%, and after a node device changes roles, this ratio for its original node resource pool must still be below 80%. A space bottleneck exists when the ratio of used to total storage space of a node resource pool exceeds 90%, and after a node device changes roles, the bottleneck resource pool passes the evaluation only if its ratio falls below 90%.
After computing the load of each node resource pool, when the load of a pool is below its normal threshold, the bottleneck monitoring module 150 judges that the pool is in the normal state and no bottleneck has occurred. When the load of a pool is above its normal threshold but still below its bottleneck threshold, the bottleneck monitoring module 150 judges that the pool is under high load but has not yet reached the "bottleneck" described above. However, when the load of a pool has reached or exceeded its bottleneck threshold, the pool's load is close to full, and the bottleneck monitoring module 150 judges that a "bottleneck" has occurred in the pool, for example when the computing resource pool 120 has too little spare performance for the pending computations, or when the available storage space of the storage resource pool 130 has fallen below a preset reserve.
For convenience of explanation, this embodiment assumes that a space bottleneck has occurred in the storage resource pool 130. Therefore, after the bottleneck monitoring module 150 determines that a bottleneck has occurred and identifies the node resource pool in which it occurred (namely the storage resource pool 130), the method proceeds from step S220 to step S230, in which the node selection module 160 evaluates and selects at least one switching node from the node devices of the node resource pools other than the bottleneck resource pool. In other words, the node selection module 160 selects node devices suitable to serve as storage nodes from the pools without a bottleneck (such as the service resource pool 110 or the computing resource pool 120), and evaluates whether changing their roles would indeed leave the cloud computing system 100 free of bottlenecks.
To avoid repeatedly swapping roles between two kinds of node devices whose pools are already under high load, the node selection module 160 of this embodiment selects the node device whose role is to be changed only from pools in the normal state (that is, pools whose load is below the normal threshold), never from pools under high load. In addition, the node selection module 160 must estimate, based on the bottleneck thresholds, whether after the switching node changes roles (that is, after it is reassigned from its original node resource pool to the bottleneck resource pool) the loads of the original pool and of the bottleneck resource pool will both be smaller than their respective bottleneck thresholds, so that the cloud computing system 100 is expected to be free of bottlenecks after the role change. The node selection module 160 of this embodiment can straightforwardly compute the average CPU load of the node devices in a given pool, or compute whether the storage space is exceeded, to judge whether the pool has reached or exceeded its bottleneck threshold.
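The selection rules above can be put together in a hedged sketch; the pool layout, state labels, and candidate ordering are simplified placeholders, not the patent's implementation:

```python
def select_switching_node(pools, bottleneck_pool):
    """Pick a candidate switching node from a pool that is in the normal
    state (never from a high-load pool), per the selection rules above.

    `pools` maps pool names to dicts holding the pool's state and node
    list; the layout is illustrative only.
    """
    for name, info in pools.items():
        if name == bottleneck_pool or info["state"] != "normal":
            continue
        if info["nodes"]:
            return info["nodes"][0]  # a real system would also run the
                                     # post-conversion threshold estimate
    return None

pools = {
    "service pool 110":   {"state": "high-load", "nodes": ["112_1", "112_i"]},
    "computing pool 120": {"state": "normal", "nodes": ["122_1", "122_2"]},
    "storage pool 130":   {"state": "bottleneck", "nodes": ["132_1"]},
}
print(select_switching_node(pools, "storage pool 130"))  # prints: 122_1
```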
For example, suppose the node selection module 160 selects the computing node 122_2 in the computing resource pool 120, which is in the normal state, as the switching node that will serve as a storage node. If the node selection module 160's evaluation shows that the effects described above can be achieved, it designates this node device as the switching node and proceeds with the following steps.
In step S240, the management node 112_2 changes the node-related data of the switching node 122_2 in the node-related database DB, so as to reassign the switching node 122_2 from its original node resource pool (the computing resource pool 120) to the bottleneck resource pool (the storage resource pool 130). The flow of step S240 can be subdivided into steps S250 to S290, which are described one by one below with reference to the node-related information table in the node-related database DB.
When the cloud operating system of the cloud computing system 100 configures the node devices, it records the node-related information of each device in the node-related information table. The node-related information can be obtained in several ways. For example, when the basic input/output system (BIOS) of a node device runs its power-on self-test (POST), it can dynamically obtain node-related data (for example, data about the CPU, memory, hard disk, and network card), obtain further node-related data (for example, the device's product data, BIOS information, and node type) from the SMBIOS data structures (types 0, 1, 2, and OEM types) and from the network card EEPROM (such as the MAC address), and finally send these data via IPMI OEM commands to the baseboard management controller (BMC) of the node device. In addition, the BMC can dynamically obtain BMC network card information, such as the BMC network card's MAC address, IP address, and bandwidth, to complete the node-related information. Table (1) below serves as an example of the node-related database DB and the node-related information in it. The five node-related records in Table (1) are taken, in order, from the service nodes 112_1 and 112_i, the computing nodes 122_1 and 122_2, and the storage node 132_1 of Fig. 1.
Table (1)
Table (1) includes ten fields, which respectively record, for each node device, the MAC address of the baseboard management controller (BMC) network card, the IP address and bandwidth of the BMC network card, the MAC address of the system network card, the IP address and bandwidth of the system network card, the processor information (model/clock speed), the memory information, the hard disk information, the node location, the node type, and the server type. The IP address of the system network card is obtained via network boot.
Taking the node-related information of the node device 112_1 as an example, the MAC address of its BMC network card is "00:A0:D1:EC:F8:B1", the assigned IP address of the BMC network card is "10.1.0.1", and the bandwidth of the BMC network card is 100 Mbps (bps = bits per second). The MAC address of the system network card of the node device 112_1 is "00:A0:D1:EA:34:E1", its IP address is "10.1.0.11", and its bandwidth is 1000 Mbps. The model of the CPU of the node device 112_1 is "Intel(R) Xeon(R) CPU E5540" and its clock speed is 2530 MHz. The node device 112_1 includes four memory modules, DIMM1~DIMM4, each with a capacity of 8 GB. In addition, the hard disk of the node device 112_1 sits in carrier number 1; the disk type is SAS (Serial Attached SCSI, SCSI = Small Computer System Interface), the disk capacity is 1 TB, the rotation speed is 7200 RPM (revolutions per minute), and the disk cache capacity is 16 MB.
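Since the cells of Table (1) itself did not survive extraction, the record for node device 112_1 described in the prose can be sketched as a data structure; the field names are paraphrases chosen here, not the patent's own column labels:

```python
# One node-related record, reconstructed from the prose description of
# node device 112_1; field names are illustrative.
node_112_1 = {
    "bmc_mac": "00:A0:D1:EC:F8:B1",
    "bmc_ip": "10.1.0.1",
    "bmc_bandwidth_mbps": 100,
    "sys_mac": "00:A0:D1:EA:34:E1",
    "sys_ip": "10.1.0.11",
    "sys_bandwidth_mbps": 1000,
    "cpu": {"model": "Intel(R) Xeon(R) CPU E5540", "mhz": 2530},
    "memory": {"modules": ["DIMM1", "DIMM2", "DIMM3", "DIMM4"],
               "module_gb": 8},
    "disk": {"carrier": 1, "type": "SAS", "capacity_tb": 1,
             "rpm": 7200, "cache_mb": 16},
    "node_type": "service node",
}

total_memory_gb = (len(node_112_1["memory"]["modules"])
                   * node_112_1["memory"]["module_gb"])
print(total_memory_gb)  # prints: 32
```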
Returning to Fig. 2 with reference to Fig. 1, in step S250 the data access module 170 queries the node-related database DB and obtains the node-related data of the switching node 122_2. That is, the data access module 170 obtains from the node-related database DB the node-related data shown in Table (2) below.
Table (2)
In step S260, the data access module 170 adjusts the node-related data of the switching node 122_2 in Table (2) and stores the data back into the node-related database DB in the service node 112_1. In this embodiment, the fields "node type" and "server type" in Table (2) are changed from the original "computing node" to "storage node" (as shown in Table (3) below), so that the switching node 122_2 is reassigned from being a computing node of its original node resource pool (the computing resource pool 120) to being a storage node of the bottleneck resource pool (the storage resource pool 130). In other embodiments, the data access module 170 can also at this point set the node-related database DB to quarantine the node-related data of the switching node 122_2, so that other node devices cannot access the switching node 122_2.
Table (3)
Then, in step S270, the node isolation module 180 isolates the switching node 122_2 from the cloud computing system 100. Specifically, the node isolation module 180 performs several procedures to isolate the switching node 122_2. For example, the node isolation module 180 migrates the virtual machines (VMs) running on the switching node 122_2 to the other node devices 122_1~122_j of the computing resource pool 120. After the virtual machines have been moved, the node isolation module 180 shuts down all service programs running on the switching node 122_2.
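A minimal sketch of this isolation flow (migrate the VMs off the node, then stop its services); the VM and service objects are toy stand-ins, and the round-robin placement is purely for illustration:

```python
def isolate_node(node, peer_nodes):
    """Isolate a switching node: migrate each running VM to a peer node
    in the original pool (round-robin here), then shut down the node's
    service programs."""
    for index, vm in enumerate(list(node["vms"])):
        target = peer_nodes[index % len(peer_nodes)]
        target["vms"].append(vm)   # stand-in for a live VM migration
        node["vms"].remove(vm)
    node["services"] = []          # close all running service programs

node_122_2 = {"vms": ["vm-a", "vm-b"], "services": ["compute-agent"]}
node_122_1 = {"vms": [], "services": ["compute-agent"]}
isolate_node(node_122_2, [node_122_1])
print(node_122_2["vms"], node_122_1["vms"])  # prints: [] ['vm-a', 'vm-b']
```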
In step S280, the node deployment module 190 adjusts the switching node 122_2 according to the bottleneck resource pool (the storage resource pool 130). The node deployment module 190 may redeploy the switching node 122_2 according to the adjusted node type/server type; that is, it installs on the switching node 122_2 the operating system required by the bottleneck resource pool (the storage resource pool 130), and, after the operating system has been installed, installs the service packages that every storage node must have, so that the switching node 122_2 meets the requirements of the storage resource pool 130.
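The redeployment of step S280 is driven by a per-pool profile: an operating system plus the service packages every node of that pool must carry. A minimal sketch under assumed names — the patent names no concrete operating system or package, so `storage-os` and the package names below are hypothetical:

```python
# Hypothetical pool profiles; OS and package names are illustrative only.
POOL_PROFILES = {
    "storage resource pool 130": {
        "os": "storage-os",
        "packages": ["object-store-daemon", "replication-agent"],
    },
}

def redeploy(node, bottleneck_pool):
    """Step S280: install the OS required by the bottleneck pool first,
    then the service packages every node of that pool must have."""
    profile = POOL_PROFILES[bottleneck_pool]
    node["os"] = profile["os"]                    # OS installation comes first
    node["packages"] = list(profile["packages"])  # then the service packages
    node["pool"] = bottleneck_pool
    return node

node = {"id": "122_2", "os": "compute-os", "packages": [], "pool": None}
redeploy(node, "storage resource pool 130")
print(node["os"], node["pool"])
```

Keeping the profile in one table means adding a new pool type only requires a new entry, not new deployment logic.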
Finally, as shown in Fig. 3, which is another schematic diagram of the cloud computing system 100 according to an embodiment of the invention, and with reference also to Fig. 2: in step S290, the node addition module 195 rejoins the switching node 122_2 to the cloud computing system 100, converting it from the computing node 122_2 of its original node resource pool (the computing resource pool 120) into the storage node 132_x of the bottleneck resource pool (the storage resource pool 130), as indicated by the dotted arrow 300. The data access module 170 may at this point also set the node association database DB to reopen the node-related data of the former switching node 122_2 (namely, the storage node 132_x of Fig. 3), so that the other node devices can access the storage node 132_x.
In summary, the cloud computing system of the embodiments of the invention sets a separate load limit for each node resource pool and monitors the operating condition of each pool. When a bottleneck occurs in a particular node resource pool and no spare nodes are available to support it, the cloud computing system selects some nodes from pools that are operating normally without bottlenecks and reassigns them to that particular pool (in other words, it redistributes the roles of some of the nodes), thereby mitigating the bottleneck. By automatically adjusting and redistributing the roles of these servers, the cloud computing system can resolve bottlenecks on its own, raise its hardware operating efficiency, and provide higher performance.
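The overall policy — per-pool bottleneck thresholds, bottleneck detection, and donor selection such that both the donor pool and the bottleneck pool end up below their thresholds (as claims 2 and 3 describe) — can be sketched as follows. This is an illustrative model only: the load/capacity representation, threshold values, and pool names are assumptions, not taken from the patent:

```python
def find_bottleneck(pools):
    """Return the name of a pool whose load exceeds its bottleneck threshold."""
    for name, p in pools.items():
        if p["load"] / p["capacity"] > p["bottleneck_threshold"]:
            return name
    return None

def pick_switching_node(pools, bottleneck):
    """Pick a donor node such that, after the move, both the donor pool and
    the bottleneck pool stay below their bottleneck thresholds (claim 3)."""
    target = pools[bottleneck]
    for name, p in pools.items():
        if name == bottleneck or not p["nodes"]:
            continue
        donor_after = p["load"] / (p["capacity"] - 1)    # donor loses a node
        target_after = target["load"] / (target["capacity"] + 1)  # gains one
        if (donor_after < p["bottleneck_threshold"]
                and target_after < target["bottleneck_threshold"]):
            return name, p["nodes"][0]
    return None

pools = {
    "compute": {"load": 2.0, "capacity": 5, "nodes": ["122_2", "122_3"],
                "bottleneck_threshold": 0.9},
    "storage": {"load": 2.9, "capacity": 3, "nodes": ["132_1"],
                "bottleneck_threshold": 0.9},
}
bn = find_bottleneck(pools)
print(bn, pick_switching_node(pools, bn))
```

The key property the claims insist on is the double check in `pick_switching_node`: a node is only moved if the reassignment cures the bottleneck pool without creating a new bottleneck in the donor pool.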
Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone with ordinary skill in the art may make minor changes and refinements without departing from the spirit and scope of the invention; the scope of protection of the invention is therefore defined by the appended claims.
Claims (9)
1. A management method of hardware performance, adapted to a cloud computing system, the cloud computing system including a plurality of node devices configured into a plurality of node resource pools, the management method comprising:
detecting the loads of the node resource pools to determine a bottleneck and a bottleneck resource pool in which the bottleneck occurs, wherein the node resource pools include the bottleneck resource pool;
evaluating the node devices of the node resource pools other than the bottleneck resource pool and selecting at least one switching node; and
converting the at least one switching node so as to reassign the at least one switching node from its original node resource pool to the bottleneck resource pool,
wherein converting the at least one switching node comprises the following steps:
retrieving a node association database to obtain node-related data of the at least one switching node;
adjusting the node-related data of the at least one switching node so that the at least one switching node is changed from its original node resource pool to the bottleneck resource pool;
isolating the at least one switching node from the cloud computing system;
adjusting the at least one switching node according to the bottleneck resource pool; and
rejoining the at least one switching node to the cloud computing system.
2. The management method according to claim 1, further comprising:
setting a normal threshold and a bottleneck threshold for each of the node resource pools;
when the load of one of the node resource pools is lower than the corresponding normal threshold, that node resource pool is in a normal condition; and
when the load of one of the node resource pools is higher than the corresponding bottleneck threshold, the bottleneck occurs in that node resource pool and it becomes the bottleneck resource pool.
3. The management method according to claim 2, wherein evaluating the at least one switching node comprises the following step:
estimating, according to the bottleneck thresholds, that after the at least one switching node is reassigned from its original node resource pool to the bottleneck resource pool, the load of the original node resource pool and the load of the bottleneck resource pool are each lower than their corresponding bottleneck thresholds.
4. The management method according to claim 3, wherein isolating the at least one switching node comprises the following steps:
migrating a plurality of virtual machines on the at least one switching node from the at least one switching node to other node devices of its original node resource pool; and
shutting down a plurality of service programs executed by the at least one switching node.
5. The management method according to claim 4, wherein isolating the at least one switching node further comprises the following step:
setting the node association database to seal off the node-related data of the at least one switching node.
6. The management method according to claim 1, wherein the loads of the node resource pools include, for each node resource pool, a computing load, a storage-space load, and/or a combination thereof.
7. The management method according to claim 1, wherein the node resource pools include a service resource pool, a computing resource pool, a storage resource pool, and/or a combination thereof.
8. A cloud computing system, comprising:
a plurality of node devices, mutually coupled through a network and configured into a plurality of node resource pools; and
a management node, coupled to the node devices through the network, detecting the loads of the node resource pools to determine a bottleneck and a bottleneck resource pool in which the bottleneck occurs, wherein the node resource pools include the bottleneck resource pool,
wherein the management node evaluates the node devices of the node resource pools other than the bottleneck resource pool, selects at least one switching node, and converts the at least one switching node so as to reassign the at least one switching node from its original node resource pool to the bottleneck resource pool, the management node further comprising:
a node isolation module, isolating the at least one switching node from the cloud computing system.
9. The cloud computing system according to claim 8, wherein the management node comprises:
a bottleneck monitoring module, setting a normal threshold and a bottleneck threshold for each of the node resource pools, determining that one of the node resource pools is in a normal condition when its load is lower than the corresponding normal threshold, and determining that the bottleneck occurs in one of the node resource pools and that it becomes the bottleneck resource pool when its load is higher than the corresponding bottleneck threshold;
a node selection module, evaluating the at least one switching node according to the bottleneck thresholds, wherein after the at least one switching node is reassigned from its original node resource pool to the bottleneck resource pool, the load of the original node resource pool and the load of the bottleneck resource pool are each lower than their corresponding bottleneck thresholds;
a data access module, retrieving a node association database to obtain node-related data of the at least one switching node and adjusting the node-related data of the at least one switching node so that the at least one switching node is changed from its original node resource pool to the bottleneck resource pool;
a node deployment module, adjusting the at least one switching node according to the bottleneck resource pool; and
a node addition module, rejoining the at least one switching node to the cloud computing system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110446425.1A CN103188159B (en) | 2011-12-28 | 2011-12-28 | The management method of hardware usefulness and high in the clouds arithmetic system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103188159A CN103188159A (en) | 2013-07-03 |
CN103188159B true CN103188159B (en) | 2016-08-10 |
Family
ID=48679130
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110446425.1A Active CN103188159B (en) | 2011-12-28 | 2011-12-28 | The management method of hardware usefulness and high in the clouds arithmetic system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103188159B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104618413B (en) * | 2013-11-05 | 2018-09-11 | 英业达科技有限公司 | High in the clouds device configuration method |
CN107316190A (en) * | 2016-04-26 | 2017-11-03 | 阿里巴巴集团控股有限公司 | A kind of processing method and processing device of Internet resources transfer service |
CN107277193B (en) * | 2017-08-09 | 2020-05-15 | 苏州浪潮智能科技有限公司 | Method, device and system for managing address of baseboard management controller |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101582850A (en) * | 2009-06-19 | 2009-11-18 | 优万科技(北京)有限公司 | Method and system for realizing load balance |
CN102244685A (en) * | 2011-08-11 | 2011-11-16 | 中国科学院软件研究所 | Distributed type dynamic cache expanding method and system supporting load balancing |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100250746A1 (en) * | 2009-03-30 | 2010-09-30 | Hitachi, Ltd. | Information technology source migration |
US8874744B2 (en) * | 2010-02-03 | 2014-10-28 | Vmware, Inc. | System and method for automatically optimizing capacity between server clusters |
- 2011-12-28 CN CN201110446425.1A patent/CN103188159B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101582850A (en) * | 2009-06-19 | 2009-11-18 | 优万科技(北京)有限公司 | Method and system for realizing load balance |
CN102244685A (en) * | 2011-08-11 | 2011-11-16 | 中国科学院软件研究所 | Distributed type dynamic cache expanding method and system supporting load balancing |
Also Published As
Publication number | Publication date |
---|---|
CN103188159A (en) | 2013-07-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9954758B2 (en) | Virtual network function resource allocation and management system | |
US9684542B2 (en) | Smart cloud workload balancer | |
US8930543B2 (en) | Dynamically building a set of compute nodes to host the user's workload | |
Dasgupta et al. | Workload management for power efficiency in virtualized data centers | |
CN101601014B (en) | Methods and systems for load balancing of virtual machines in clustered processors using storage related load information | |
CN104601664B (en) | A kind of control system of cloud computing platform resource management and scheduling virtual machine | |
Hsu et al. | Smoothoperator: Reducing power fragmentation and improving power utilization in large-scale datacenters | |
US6925421B2 (en) | Method, system, and computer program product for estimating the number of consumers that place a load on an individual resource in a pool of physically distributed resources | |
CN103425511A (en) | System and method of installing and deploying application software in cloud computing environment | |
US20160380921A1 (en) | Mechanism of identifying available memory resources in a network of multi-level memory modules | |
US20130024573A1 (en) | Scalable and efficient management of virtual appliance in a cloud | |
EP4029197B1 (en) | Utilizing network analytics for service provisioning | |
US20190278483A1 (en) | Implementing hierarchical availability domain aware replication policies | |
Pham et al. | Applying Ant Colony System algorithm in multi-objective resource allocation for virtual services | |
WO2021034668A1 (en) | Optimizing clustered applications in a clustered infrastructure | |
US11995061B2 (en) | Techniques and architectures for partition mapping in a multi-node computing environment | |
CN104754008A (en) | Network storage node, network storage system and device and method for network storage node | |
CN103188159B (en) | The management method of hardware usefulness and high in the clouds arithmetic system | |
CN110990154A (en) | Big data application optimization method and device and storage medium | |
CN105487928B (en) | A kind of control method, device and Hadoop system | |
TW201327205A (en) | Managing method for hardware performance and cloud computing system | |
Wu et al. | Heterogeneous virtual machine consolidation using an improved grouping genetic algorithm | |
Li et al. | Multi-algorithm collaboration scheduling strategy for docker container | |
Ke et al. | DisaggRec: Architecting Disaggregated Systems for Large-Scale Personalized Recommendation | |
Saravanakumar et al. | An Efficient Technique for Virtual Machine Clustering and Communications Using Task‐Based Scheduling in Cloud Computing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |