US20230224205A1 - Hardware resource management for management appliances running on a shared cluster of hosts - Google Patents
- Publication number: US20230224205A1 (application Ser. No. 17/691,153)
- Authority: US (United States)
- Prior art keywords: cluster, resource pool, hardware resources, sddc, management
- Prior art date: 2022-01-13
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04L41/046: Network management architectures or arrangements comprising network management agents or mobile agents therefor
- G06F9/5072: Allocation of resources, e.g., of the CPU; grid computing
- G06F9/5077: Logical partitioning of resources; management or configuration of virtualized resources
- H04L41/0806: Configuration setting for initial configuration or provisioning, e.g., plug-and-play
- H04L41/0895: Configuration of virtualised networks or elements, e.g., virtualised network function or OpenFlow elements
- H04L67/10: Protocols in which an application is distributed across nodes in the network
- G06F2209/5011: Indexing scheme relating to resource allocation; pool
- H04L43/0805: Monitoring or testing based on specific metrics; checking availability
Abstract
- A method of reserving hardware resources for the management appliances of a software-defined data center (SDDC) that have been deployed onto one or more hosts of a cluster of hosts includes reserving hardware resources of the cluster for a resource pool that has been created for the management appliances, the hardware resources including at least processor resources of the hosts and memory resources of the hosts, and assigning the management appliances to the resource pool created for the management appliances. The management appliances share the hardware resources of the cluster with one or more other resource pools and, after the steps of reserving and assigning, are allocated at least the hardware resources that have been reserved for the resource pool created for the management appliances.
Description
- Benefit is claimed under 35 U.S.C. 119(a)-(d) to Foreign Application Serial No. 202241002060, entitled “HARDWARE RESOURCE MANAGEMENT FOR MANAGEMENT APPLIANCES RUNNING ON A SHARED CLUSTER OF HOSTS,” filed in India on Jan. 13, 2022, by VMware, Inc., which is herein incorporated by reference in its entirety for all purposes.
- In a software-defined data center (SDDC), virtual infrastructure, which includes virtual machines (VMs) and virtualized storage and networking resources, is provisioned from hardware infrastructure that includes a plurality of host computers (hereinafter also referred to simply as “hosts”), storage devices, and networking devices. The provisioning of the virtual infrastructure is carried out by management software, referred to herein as virtual infrastructure management (VIM) software, that communicates with virtualization software (e.g., hypervisor) installed in the host computers.
- VIM server appliances, such as VMware vCenter® server appliance, include such VIM software and are widely used to provision SDDCs across multiple clusters of hosts, where each cluster is a group of hosts that are managed together by the VIM software to provide cluster-level functions, such as load balancing across the cluster by performing VM migration between the hosts, distributed power management, dynamic VM placement according to affinity and anti-affinity rules, and high availability (HA). The VIM software also manages a shared storage device to provision storage resources for the cluster from the shared storage device.
- For customers who have multiple SDDCs deployed across different geographical regions, and deployed in a hybrid manner, e.g., on-premise, in a public cloud, or as a service, the process of managing VIM server appliances across many different locations has proven to be difficult. These customers are looking for an easier way to monitor their SDDCs for compliance with company policies and to manage the upgrade and remediation of such SDDCs.
- One or more embodiments provide cloud services for centrally managing the SDDCs. These cloud services rely on agents running in a cloud gateway appliance that is located in a customer environment, to deliver the cloud services to the customer environment in which the SDDCs are deployed.
- One or more embodiments also ensure that management appliances, which include the cloud gateway appliance and VIM server appliances that locally manage the SDDCs, have sufficient hardware resources to perform their management operations. A method of reserving hardware resources for the management appliances of an SDDC that have been deployed onto one or more hosts of a cluster of hosts, according to an embodiment includes reserving hardware resources of the cluster for a resource pool that has been created for the management appliances, the hardware resources including at least processor resources of the hosts and memory resources of the hosts, and assigning the management appliances to the resource pool created for the management appliances. The management appliances share the hardware resources of the cluster with one or more other resource pools and, after the steps of reserving and assigning, are allocated at least the hardware resources that have been reserved for the resource pool created for the management appliances.
- Further embodiments include a non-transitory computer-readable storage medium comprising instructions that cause a computer system to carry out the above method, as well as a computer system configured to carry out the above method.
- FIG. 1 depicts a cloud control plane implemented in a public cloud, and a plurality of SDDCs that are managed through the cloud control plane, according to embodiments.
- FIG. 2 is a schematic illustration of a plurality of clusters that are managed by a VIM server appliance.
- FIG. 3A is a flow diagram that depicts the steps of setting up a management resource pool for a management cluster and assigning management appliances to the management resource pool.
- FIG. 3B is a schematic diagram of resource pools that have been set up for the management cluster.
- FIG. 4 is a sample desired state document in JSON format.
- FIG. 5 depicts a sequence of steps carried out to apply the desired state of a management cluster.
- FIG. 6 depicts a sequence of steps carried out to perform a compliance check on the management cluster against the desired state.
- FIG. 7A is a conceptual diagram illustrating the process of applying the desired state of the management cluster.
- FIG. 7B is a conceptual diagram illustrating the process of determining drift of the running state of the management cluster from its desired state.
- A cloud control plane is employed in the embodiments for centrally managing SDDCs, which may be of different types and which may be deployed across different geographical regions. The cloud control plane relies on agents running locally in the SDDCs to establish cloud inbound connections with the cloud control plane, retrieve from the cloud control plane various tasks to perform on the SDDCs, and delegate the tasks to a local control plane.
- Embodiments further ensure that management appliances, which include the VIM server appliance and a cloud gateway appliance on which the agents are deployed, have sufficient hardware resources to perform their management operations by setting up a resource pool for the management appliances and reserving the hardware resources needed by the management appliances. In addition, the state of a shared cluster of hosts on which the management appliances are deployed is managed according to a desired state, which is stored in a human readable and editable file, e.g., a JSON (JavaScript Object Notation) file. The desired state specifies an amount of hardware resources to be reserved for different resource pools of the shared cluster, including the resource pool for the management appliances. If the running state of the shared cluster drifts from its desired state, remediation is performed on the shared cluster by applying the desired state to the shared cluster.
- FIG. 1 depicts a cloud control plane 110 implemented in a public cloud 10, and a plurality of SDDCs 20 that are managed through cloud control plane 110. In the embodiment illustrated herein, cloud control plane 110 is accessible by multiple tenants through UI/API 101, and each of the different tenants manages a group of SDDCs through cloud control plane 110. In the following description, a group of SDDCs of one particular tenant is depicted as SDDCs 20, and to simplify the description, the operation of cloud control plane 110 will be described with respect to management of SDDCs 20. However, it should be understood that the SDDCs of other tenants have the same appliances, software products, and services running therein as SDDCs 20, and are managed through cloud control plane 110 in the same manner as described below for SDDCs 20.
- A user interface (UI) or an application programming interface (API) that interacts with cloud control plane 110 is depicted in FIG. 1 as UI/API 101. Through UI/API 101, an administrator of SDDCs 20 can issue commands to apply a desired state to SDDCs 20 or to upgrade the VIM server appliance in SDDCs 20.
- Cloud control plane 110 represents a group of services running in virtual infrastructure of public cloud 10 that interact with each other to provide a control plane through which the administrator of SDDCs 20 can manage SDDCs 20 by issuing commands through UI/API 101. API gateway 111 is also a service running in the virtual infrastructure of public cloud 10, and this service is responsible for routing cloud inbound connections to the proper service in cloud control plane 110, e.g., SDDC configuration/upgrade interface endpoint service 120, notification service 170, or coordinator 150.
- SDDC configuration/upgrade interface endpoint service 120 is responsible for accepting commands made through UI/API 101 and returning the result to UI/API 101. An operation requested in the commands can be either synchronous or asynchronous. Asynchronous operations are stored in activity service 130, which keeps track of the progress of the operation, and an activity ID, which can be used to poll for the result of the operation, is returned to UI/API 101. If the operation targets multiple SDDCs 20 (e.g., an operation to apply the desired state to SDDCs 20 or an operation to upgrade the VIM server appliance in SDDCs 20), SDDC configuration/upgrade interface endpoint service 120 creates an activity which has children activities. SDDC configuration/upgrade worker service 140 processes these children activities independently and respectively for multiple SDDCs 20, and activity service 130 tracks these children activities according to results returned by SDDC configuration/upgrade worker service 140.
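- As a minimal sketch of this bookkeeping, an asynchronous operation that targets multiple SDDCs can be modeled as a parent activity holding one child activity per target SDDC, with the parent's activity ID used for polling. The class, field, and status names below are illustrative assumptions, not the actual service's API:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Activity:
    """Hypothetical record tracked by an activity service."""
    operation: str                                 # e.g., "apply-desired-state"
    status: str = "RUNNING"                        # RUNNING / SUCCEEDED / FAILED
    children: dict = field(default_factory=dict)   # sddc_id -> child Activity
    activity_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def create_multi_sddc_activity(operation: str, sddc_ids: list) -> Activity:
    """Create a parent activity with one child activity per target SDDC."""
    parent = Activity(operation)
    for sddc_id in sddc_ids:
        parent.children[sddc_id] = Activity(operation)
    return parent

def poll(parent: Activity) -> str:
    """Roll child results up into a parent status, as results come back."""
    statuses = {child.status for child in parent.children.values()}
    if "FAILED" in statuses:
        return "FAILED"
    if statuses == {"SUCCEEDED"}:
        return "SUCCEEDED"
    return "RUNNING"
```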
- SDDC configuration/upgrade worker service 140 polls activity service 130 for new operations and processes them by passing the tasks to be executed to SDDC task dispatcher service 141. SDDC configuration/upgrade worker service 140 then polls SDDC task dispatcher service 141 for results and notifies activity service 130 of the results. SDDC configuration/upgrade worker service 140 also polls SDDC event dispatcher service 142 for events posted to SDDC event dispatcher service 142 and handles these events based on the event type.
- SDDC task dispatcher service 141 dispatches each task passed thereto by SDDC configuration/upgrade worker service 140 to coordinator 150 and tracks the progress of the task by polling coordinator 150. Coordinator 150 accepts cloud inbound connections, which are routed through API gateway 111, from SDDC upgrade agents 220. SDDC upgrade agents 220 are responsible for establishing cloud inbound connections with coordinator 150 to acquire tasks dispatched to coordinator 150 for execution in their respective SDDCs 20, and for orchestrating the execution of these tasks. Upon completion of the tasks, SDDC upgrade agents 220 return results to coordinator 150 through the cloud inbound connections. SDDC upgrade agents 220 also notify coordinator 150 of various events through the cloud inbound connections, and coordinator 150 in turn posts these events to SDDC event dispatcher service 142 for handling by SDDC configuration/upgrade worker service 140.
- SDDC profile manager service 160 is responsible for storing the desired state documents in data store 165 (e.g., a virtual disk or a depot accessible using a URL) and, for each of SDDCs 20, for tracking the history of the desired state document associated therewith and any changes from the desired state specified in that document, e.g., using a relational database.
- An operation requested in the commands made through UI/API 101 may be synchronous, instead of asynchronous. An operation is synchronous if there is a specific time window within which the operation must be completed. Examples of a synchronous operation include an operation to get the desired state of an SDDC or an operation to get the SDDCs that are associated with a particular desired state. In the embodiments, to enable such operations to be completed within the specific time window, SDDC configuration/upgrade interface endpoint service 120 has direct access to data store 165.
- As described above, a plurality of SDDCs 20, which may be of different types and which may be deployed across different geographical regions, is managed through cloud control plane 110. In one example, one of SDDCs 20 is deployed in a private data center of the customer and another one of SDDCs 20 is deployed in a public cloud, and all of the SDDCs are located in different geographical regions so that they would not be subject to the same natural disasters, such as hurricanes, fires, and earthquakes.
- Any of the services described above (and below) may be a microservice that is implemented as a container image executed on the virtual infrastructure of public cloud 10. In one embodiment, each of the services described above is implemented as one or more container images running within a Kubernetes® pod.
- In each SDDC 20, regardless of its type and location, a gateway appliance 210 and VIM server appliance 230 are provisioned from the virtual resources of SDDC 20. In one embodiment, gateway appliance 210 and VIM server appliance 230 are each a VM instantiated in one or more of the hosts of the same cluster that is managed by VIM server appliance 230. Virtual disk 211 is provisioned for gateway appliance 210, and storage blocks of virtual disk 211 map to storage blocks allocated to virtual disk file 281. Similarly, virtual disk 231 is provisioned for VIM server appliance 230, and storage blocks of virtual disk 231 map to storage blocks allocated to virtual disk file 282. Virtual disk files 281 and 282 are stored in shared storage 280. Shared storage 280 is managed by VIM server appliance 230 as storage for the cluster and may be a physical storage device, e.g., a storage array, or a virtual storage area network (VSAN) device, which is provisioned from physical storage devices of the hosts in the cluster.
- Gateway appliance 210 functions as a communication bridge between cloud control plane 110 and VIM server appliance 230. In particular, SDDC configuration agent 219 running in gateway appliance 210 communicates with coordinator 150 to retrieve SDDC configuration tasks (e.g., apply desired state) that were dispatched to coordinator 150 for execution in SDDC 20 and delegates the tasks to SDDC configuration service 234 running in VIM server appliance 230. In addition, SDDC upgrade agent 220 running in gateway appliance 210 communicates with coordinator 150 to retrieve upgrade tasks (e.g., a task to upgrade the VIM server appliance) that were dispatched to coordinator 150 for execution in SDDC 20 and delegates the tasks to LCM 261 running in VIM server appliance 230. After the execution of these tasks has completed, SDDC configuration agent 219 or SDDC upgrade agent 220 sends back the execution result to coordinator 150.
- Various services running in VIM server appliance 230, including VIM services for managing the SDDC, are depicted as services 260. Services 260 include LCM 261, which is responsible for managing the lifecycle of VIM server appliance 230, e.g., an upgrade of VIM server appliance 230. Distributed resource scheduler (DRS) 262 is a VIM service that is responsible for setting up resource pools and load balancing of workloads (e.g., VMs) across the resource pools. High availability (HA) 263 is a VIM service that is responsible for restarting HA-designated virtual machines that are running on failed hosts of the cluster on other hosts of the cluster. VI profile service 264 is a VIM service that is responsible for applying the desired configuration of the virtual infrastructure managed by VIM server appliance 230 (e.g., the number of clusters, the hosts that each cluster would manage, etc.) and the desired configuration of various features provided by other VIM services running in VIM server appliance 230 (e.g., DRS 262 and HA 263), as well as for retrieving the running configuration of the virtual infrastructure managed by VIM server appliance 230 and the running configuration of various features provided by the other VIM services running in VIM server appliance 230. Configuration and database files 272 for services 260 running in VIM server appliance 230 are stored in virtual disk 231.
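- The GET/SET division of labor that VI profile service 264 exposes through its service plug-ins can be sketched as a small interface. Everything below (the class names, method names, and the drs_client) is a hypothetical illustration of the pattern, not the actual plug-in API:

```python
from abc import ABC, abstractmethod

class ServicePlugin(ABC):
    """Hypothetical plug-in: one per managed software product/service."""

    @abstractmethod
    def get_running_state(self) -> dict:
        """Return the current configuration (serves a GET command)."""

    @abstractmethod
    def set_desired_state(self, desired: dict) -> None:
        """Apply the given desired configuration (serves a SET command)."""

class DrsPlugin(ServicePlugin):
    def __init__(self, drs_client):
        self.drs = drs_client  # hypothetical client for the DRS service

    def get_running_state(self) -> dict:
        return self.drs.read_settings()

    def set_desired_state(self, desired: dict) -> None:
        self.drs.write_settings(desired)
```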
- FIG. 2 is a schematic illustration of a plurality of clusters (cluster0, cluster1, . . . , clusterN) that are managed by VIM server appliance 230. Each cluster has physical resources allocated to it. The physical resources include a plurality of host computers, storage devices, and networking devices. In FIG. 2, physical resources are depicted in solid lines and virtual resources provisioned from the physical resources are depicted in dashed lines. In particular, cluster0 includes physical hosts 301, 303, and VSAN device 305, which is provisioned from physical storage devices of hosts 301, 303. In addition, management network 311 and data network 312 of cluster0 are virtual networks provisioned from physical networking devices (e.g., network interface controllers in hosts 301, 303, switches, and routers). The other clusters, cluster1 . . . clusterN, also include physical hosts and virtual SAN devices and virtual networks provisioned from physical resources. As shown in FIG. 2, the hosts of cluster0 include a host 301 on which gateway appliance 210 and VIM server appliance 230 are deployed, and a plurality of workload VM hosts 303 on which workload VMs are deployed.
- Hereinafter, gateway appliance 210 and VIM server appliance 230 are more generally referred to as “management appliances.” Another example of a management appliance is a server appliance that is responsible for managing virtual networks. In the embodiments illustrated herein, these management appliances are deployed on hosts of cluster0, and hereinafter cluster0 is more generally referred to as a management cluster.
- In the embodiments, DRS 262 running in VIM server appliance 230 manages the sharing of hardware resources of each cluster (including the management cluster) according to one or more resource pools. When a single resource pool is defined for a cluster, the total capacity of that cluster (e.g., GHz for CPU, GB for memory, GB for storage) is shared by all of the virtual resources (e.g., VMs and the VSAN device) provisioned for that cluster. If child resource pools are defined under the root resource pool of a cluster, DRS 262 manages sharing of the physical resources of the cluster by the different child resource pools. In addition, within a particular resource pool, physical resources may be reserved for one or more virtual machines. In such a case, DRS 262 manages sharing of the physical resources allocated to that resource pool by the virtual machines and any child resource pools.
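- The pool hierarchy just described can be modeled as a small tree in which each pool carries its own reservation and shares whatever is left with its siblings. This is a simplified sketch under assumed names and numbers; real DRS admission control and shares-based allocation are far more involved:

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    name: str
    cpu_reservation_mhz: int = 0   # guaranteed CPU for this pool
    mem_reservation_mb: int = 0    # guaranteed memory for this pool
    children: list = field(default_factory=list)

    def total_cpu_reserved(self) -> int:
        """CPU reserved by this pool and all of its descendants."""
        return self.cpu_reservation_mhz + sum(
            child.total_cpu_reserved() for child in self.children)

# Root pool = the whole management cluster; child pools share its capacity.
root = ResourcePool("Cluster-0")
mgmt = ResourcePool("Management-ResourcePool",
                    cpu_reservation_mhz=8000, mem_reservation_mb=32768)
workload = ResourcePool("Workload-ResourcePool")  # unreserved: shares the rest
root.children += [mgmt, workload]
assert root.total_cpu_reserved() == 8000
```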
- FIG. 3A is a flow diagram that depicts the steps of setting up a management resource pool for a management cluster and assigning management appliances to the management resource pool to ensure that sufficient hardware resources are reserved for the management appliances. These steps are carried out by SDDC configuration service 234 according to a desired state document that specifies the desired state of the management cluster. FIG. 3B is a schematic diagram of resource pools that have been set up for the management cluster. FIG. 4 is a sample desired state document in JSON format that is processed by SDDC configuration service 234 prior to issuing a series of instructions 351-354 to one or more service plug-ins of VI profile service 264, which are responsible for applying the desired state of the management cluster specified in the desired state document.
- Instruction 351 is issued to create the management resource pool for the management appliances in the management cluster. In the sample desired state document shown in FIG. 4, the management cluster has the name Cluster-0, and the management resource pool for the management appliances has the name Management-ResourcePool. Instruction 352 is issued to reserve hardware resources for the management resource pool. In the sample desired state document shown in FIG. 4, the hardware resource reservations for the management resource pool are depicted generally as: “memory_allocation”: { }, “cpu_allocation”: { }, and “storage_allocation”: { }. Instruction 353 is issued to assign the management appliances to the management resource pool. In the sample desired state document shown in FIG. 4, the VMs with the names vCenterVM (corresponding to VIM server appliance 230) and gatewayVM (corresponding to gateway appliance 210) are assigned to the management resource pool. Instruction 354 is issued to reserve hardware resources for the individual management appliances. In the sample desired state document shown in FIG. 4, the hardware resource reservations for the individual management appliances are depicted generally as: “memory_allocation”: { }, “cpu_allocation”: { }, and “storage_allocation”: { }.
- In addition to specifying the amount of the particular hardware resource that is reserved, the hardware resource allocations, “memory_allocation”: { }, “cpu_allocation”: { }, and “storage_allocation”: { }, may also specify an upper limit to the hardware resource that can be consumed by the resource pool or VM, and the priority given to the resource pool or VM when there is contention for the shared hardware resource by another resource pool or VM.
- The schematic diagram of
- The schematic diagram of FIG. 3B depicts the management cluster as the root resource pool (root RP). Three resource pools, management resource pool 361, workload VM resource pool 362, and high availability resource pool 363, are created as children resource pools of the root resource pool. The children resource pools share the hardware resources of the management cluster according to their hardware resource allocations. The schematic diagram of FIG. 3B also depicts the VMs that are assigned to the different resource pools. The VMs assigned to management resource pool 361 include the gateway appliance and the VIM server appliance. The spare resource that is reserved from management resource pool 361 for the second VIM server appliance, which will be needed for a migration-based upgrade of the VIM server appliance, is depicted in FIG. 3B as an empty box. The VMs assigned to workload VM resource pool 362 are workload VMs.
- The sample desired state document shown in FIG. 4 also includes settings of DRS 262. The desired settings of DRS 262 specified in the desired state document are applied by a DRS plug-in of VI profile service 264, which provides the service for acquiring the running state of DRS 262 (in response to a GET command from SDDC configuration service 234) and for applying the desired state of DRS 262 specified in the desired state document (in response to a SET command from SDDC configuration service 234).
- FIG. 5 depicts a sequence of steps carried out by cloud control plane 110, SDDC configuration agent 219, and the local control plane to apply the desired state, e.g., the desired state of a management cluster, to an SDDC. The steps of FIG. 5 are triggered by a direct command entered by the administrator through UI/API 101 or an API call made through UI/API 101, which specifies the SDDC on which the apply operation is performed (hereinafter referred to as the “target SDDC”) and the desired state document to apply (step 510). In response to the direct command or API call, SDDC configuration/upgrade interface endpoint service 120 stores the operation requested in the direct command or API call in activity service 130, along with the desired state document that SDDC configuration/upgrade interface endpoint service 120 retrieves from data store 165. This operation and the desired state document are passed on to SDDC task dispatcher service 141 by SDDC configuration/upgrade worker service 140, and SDDC task dispatcher service 141 in turn dispatches the task to perform the apply operation on the target SDDC, along with the desired state document, to coordinator 150 (step 520).
- At step 522, SDDC configuration agent 219 running in the target SDDC retrieves the dispatched task and the desired state document from coordinator 150 and delegates the task to SDDC configuration service 234. Then, SDDC configuration service 234 at step 524 instructs the service plug-ins (e.g., the DRS service plug-in) to set their associated software products/services to the desired state specified in the desired state document, and at step 526 stores the desired state document in data store 226.
- SDDC configuration service 234 at step 530 notifies SDDC configuration agent 219 that the desired state has been applied, and SDDC configuration agent 219 at step 532 notifies cloud control plane 110 that the desired state has been applied to the target SDDC. Then, at step 534, notification service 170 provides notification through UI/API 101 that the desired state has been applied.
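- Condensed into code, the SDDC-side portion of this apply flow might look like the following sketch. The function and class names stand in for the agent/service interactions of FIG. 5 and are assumptions, not actual product APIs; the ServicePlugin interface is the one sketched earlier:

```python
# Hedged sketch of FIG. 5 from the SDDC side (steps 522-532).
class SDDCConfigService:
    def __init__(self, plugins: dict, data_store: dict):
        self.plugins = plugins        # e.g., {"drs": DrsPlugin(...)}
        self.data_store = data_store  # stands in for data store 226

    def apply(self, desired_doc: dict) -> None:
        for plugin in self.plugins.values():            # step 524: SET each service
            plugin.set_desired_state(desired_doc)
        self.data_store["desired_state"] = desired_doc  # step 526: persist the doc

def handle_apply_task(coordinator, config_service: SDDCConfigService) -> None:
    task, desired_doc = coordinator.retrieve_task()   # step 522: agent pulls task
    config_service.apply(desired_doc)                 # delegate to local service
    coordinator.report_result(task, "applied")        # steps 530-532: report back
```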
- FIG. 6 depicts a sequence of steps carried out by cloud control plane 110, SDDC configuration agent 219, and the local control plane to perform a compliance check on an SDDC against the desired state. The steps of FIG. 6 are triggered by a direct command entered by the administrator through UI/API 101 or an API call made through UI/API 101, which specifies the SDDC on which the compliance check is performed (hereinafter referred to as the “target SDDC”) (step 610). In response to the direct command or API call, SDDC configuration/upgrade interface endpoint service 120 stores the operation requested in the direct command or API call in activity service 130. This operation is passed on to SDDC task dispatcher service 141 by SDDC configuration/upgrade worker service 140, and SDDC task dispatcher service 141 in turn dispatches the task to perform the compliance check on the target SDDC to coordinator 150 (step 620).
- At step 622, SDDC configuration agent 219 running in the target SDDC retrieves the dispatched task from coordinator 150 and delegates the task to SDDC configuration service 234. Then, SDDC configuration service 234 at step 624 instructs the service plug-ins (e.g., the DRS plug-in) to get the running state from their associated software products/services, and at step 626 retrieves the desired state document of the target SDDC stored in data store 226 and compares the running state against the desired state specified in the desired state document.
- If, as a result of the comparison, SDDC configuration service 234 detects drift of the running state from the desired state (step 628, Yes), SDDC configuration service 234 at step 630 notifies SDDC configuration agent 219 of the drift. Then, at step 632, SDDC configuration agent 219 sends a notification of the drift event to cloud control plane 110. At step 636, notification service 170 provides notification through UI/API 101 that the running state is non-compliant with the desired state.
- If, as a result of the comparison, SDDC configuration service 234 does not detect any drift of the running state from the desired state (step 628, No), SDDC configuration service 234 at step 640 notifies SDDC configuration agent 219 that there is no drift. Then, at step 642, SDDC configuration agent 219 notifies cloud control plane 110 that the target SDDC is compliant with the desired state, and at step 644, notification service 170 provides notification through UI/API 101 that the running state is compliant with the desired state.
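- The heart of steps 624-628 is a comparison of the running state against the stored desired state. The sketch below uses a flat dictionary comparison as a stand-in for the real, per-feature comparison; the function name and the keyed-by-plugin layout are assumptions for illustration:

```python
# Hedged sketch of the drift check (FIG. 6, steps 624-628).
def check_compliance(plugins: dict, data_store: dict) -> bool:
    desired = data_store["desired_state"]                 # step 626: stored doc
    running = {name: plugin.get_running_state()           # step 624: GET each service
               for name, plugin in plugins.items()}
    drifted = {key for key, want in desired.items()       # step 628: compare
               if running.get(key) != want}
    return not drifted  # True = compliant, False = drift detected
```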
- In some embodiments, the compliance check described above is carried out on a periodic basis by each of the SDDCs, and cloud control plane 110 is notified of any drift in the SDDCs. In response to any such drift notification, the apply operation of FIG. 5 is carried out automatically to remediate the running state of the SDDCs so that it conforms to the desired state.
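- Such a periodic check-and-remediate loop might be structured as below. The interval and the callables passed in are illustrative assumptions; the patent does not specify how the periodic check is scheduled.

```python
import time

def drift_watchdog(check, reapply, notify_cloud, interval_s: float = 300.0):
    """Periodically run the compliance check; on drift, notify the cloud
    control plane and re-run the apply operation to remediate."""
    while True:
        drift = check()    # compliance check of FIG. 6
        if drift:
            notify_cloud({"event": "drift", "details": drift})
            reapply()      # apply operation of FIG. 5
        time.sleep(interval_s)
```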
- FIG. 7A is a conceptual diagram illustrating the process of applying the desired state of the management cluster according to embodiments. This process begins at step S1, where SDDC configuration agent 219 retrieves the task to apply the desired state, along with the desired state document, from cloud control plane 110. Then, at step S2, SDDC configuration agent 219 delegates the task to apply the desired state to SDDC configuration service 234. In response, SDDC configuration service 234 stores the desired state document (depicted in FIG. 7A as desired state document 273), and instructs service plug-ins 268 to set DRS 262 to the DRS settings specified in desired state document 273, to set up the resource pools specified in desired state document 273, and to assign the management appliances to the management resource pool (step S3).
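- By way of illustration, a desired state document of the kind applied at step S3 might contain entries like the following. Every field name here is an assumption made for this sketch: the patent describes the document's contents (DRS settings, resource pools, and appliance-to-pool assignments) but not its schema.

```python
# Purely illustrative desired state document; all field names are assumptions.
desired_state_doc = {
    "drs": {
        "enabled": True,
        "automation_level": "fullyAutomated",
    },
    "resource_pools": {
        "management": {"cpu_reservation_mhz": 8000,
                       "memory_reservation_mb": 32768},
        "workload": {"cpu_shares": "normal"},
    },
    # Management appliances assigned to the management resource pool.
    "management_appliances": {
        "pool": "management",
        "vms": ["vcenter-appliance", "nsx-manager"],
    },
}
```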
- FIG. 7B is a conceptual diagram illustrating the process of determining drift of the running state of the management cluster from the desired state of the management cluster according to embodiments. This process may be initiated from cloud control plane 110 as part of a compliance check described above in conjunction with FIG. 6, or may be carried out periodically by SDDC configuration service 234. At step S4, SDDC configuration service 234 instructs DRS service plug-in 268 to get the running state of the management cluster and compares the running state against the desired state specified in desired state document 273. If, as a result of the comparison, SDDC configuration service 234 detects drift of the running state from the desired state, SDDC configuration service 234 at step S5 notifies SDDC configuration agent 219 of the drift. Then, SDDC configuration agent 219 sends a notification of the drift event to cloud control plane 110 (step S6).
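- Combining the illustrative pieces above, the drift determination of steps S4-S6 reduces to fetching the management cluster's running state from the DRS plug-in and diffing it against desired state document 273. Again, this is a sketch under assumed names rather than the patented implementation.

```python
def determine_management_drift(drs_plugin, desired_doc: dict, notify) -> bool:
    """Steps S4-S6: compare the management cluster's running DRS state
    against the desired state document and report any drift."""
    running = drs_plugin.get_state()  # step S4
    desired = desired_doc.get("drs", {})
    drifted = running != desired
    if drifted:
        # Steps S5-S6: the drift event propagates from the configuration
        # service to the agent and then to the cloud control plane.
        notify({"event": "drift", "running": running, "desired": desired})
    return drifted
```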
- The embodiments described herein may employ various computer-implemented operations involving data stored in computer systems. For example, these operations may require physical manipulation of physical quantities. Usually, though not necessarily, these quantities may take the form of electrical or magnetic signals, where the quantities or representations of the quantities can be stored, transferred, combined, compared, or otherwise manipulated. Such manipulations are often referred to in terms such as producing, identifying, determining, or comparing. Any operations described herein that form part of one or more embodiments may be useful machine operations.
- One or more embodiments of the invention also relate to a device or an apparatus for performing these operations. The apparatus may be specially constructed for required purposes, or the apparatus may be a general-purpose computer selectively activated or configured by a computer program stored in the computer. Various general-purpose machines may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
- The embodiments described herein may be practiced with other computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, etc.
- One or more embodiments of the present invention may be implemented as one or more computer programs or as one or more computer program modules embodied in computer readable media. The term computer readable medium refers to any data storage device that can store data which can thereafter be input to a computer system. Computer readable media may be based on any existing or subsequently developed technology that embodies computer programs in a manner that enables a computer to read the programs. Examples of computer readable media are hard drives, NAS systems, read-only memory (ROM), RAM, compact disks (CDs), digital versatile disks (DVDs), magnetic tapes, and other optical and non-optical data storage devices. A computer readable medium can also be distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion.
- Although one or more embodiments of the present invention have been described in some detail for clarity of understanding, certain changes may be made within the scope of the claims. Accordingly, the described embodiments are to be considered as illustrative and not restrictive, and the scope of the claims is not to be limited to details given herein but may be modified within the scope and equivalents of the claims. In the claims, elements and/or steps do not imply any particular order of operation unless explicitly stated in the claims.
- Virtualization systems in accordance with the various embodiments may be implemented as hosted embodiments, non-hosted embodiments, or as embodiments that blur distinctions between the two. Furthermore, various virtualization operations may be wholly or partially implemented in hardware. For example, a hardware implementation may employ a look-up table for modification of storage access requests to secure non-disk data.
- Many variations, additions, and improvements are possible, regardless of the degree of virtualization. The virtualization software can therefore include components of a host, console, or guest OS that perform virtualization functions.
- Plural instances may be provided for components, operations, or structures described herein as a single instance. Boundaries between components, operations, and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the invention. In general, structures and functionalities presented as separate components in exemplary configurations may be implemented as a combined structure or component. Similarly, structures and functionalities presented as a single component may be implemented as separate components. These and other variations, additions, and improvements may fall within the scope of the appended claims.
Claims (20)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN202241002060 | 2022-01-13 | ||
IN202241002060 | 2022-01-13 |
Publications (2)
Publication Number | Publication Date |
---|---|
US11689411B1 (en) | 2023-06-27
US20230224205A1 (en) | 2023-07-13
Family
ID=86899035
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/691,153 (US11689411B1, active) | Hardware resource management for management appliances running on a shared cluster of hosts | 2022-01-13 | 2022-03-10
Country Status (1)
Country | Link |
---|---|
US (1) | US11689411B1 (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100228839A1 (en) * | 2009-03-09 | 2010-09-09 | Oracle International Corporation | Efficient on-demand provisioning of servers for specific software sets |
US9378044B1 (en) * | 2015-03-28 | 2016-06-28 | Vmware, Inc. | Method and system that anticipates deleterious virtual-machine state changes within a virtualization layer |
Also Published As
Publication number | Publication date |
---|---|
US11689411B1 (en) | 2023-06-27 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| AS | Assignment | Owner name: VMWARE, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: GORAI, KRISHNENDU; RADEV, IVAYLO RADOSLAVOV; KODENKIRI, AKASH; AND OTHERS; SIGNING DATES FROM 20220113 TO 20220117. REEL/FRAME: 059216/0898
| FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
| STCF | Information on status: patent grant | Free format text: PATENTED CASE
2023-11-21 | AS | Assignment | Owner name: VMWARE LLC, CALIFORNIA. Free format text: CHANGE OF NAME; ASSIGNOR: VMWARE, INC. REEL/FRAME: 067102/0395. Effective date: 20231121