US20130275966A1 - Providing application based monitoring and recovery for a hypervisor of an ha cluster - Google Patents


Info

Publication number
US20130275966A1
US20130275966A1
Authority
US
United States
Prior art keywords
guest
hypervisor
node
cluster
given
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/444,997
Other languages
English (en)
Inventor
Richard E. Harper
Marcel Mittelstaedt
Markus Mueller
Lisa F. Spainhower
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US13/444,997 priority Critical patent/US20130275966A1/en
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HARPER, RICHARD E., MITTELSTAEDT, Marcel, MUELLER, MARKUS, SPAINHOWER, LISA F.
Priority to US13/589,390 priority patent/US9110867B2/en
Priority to GB1414770.6A priority patent/GB2513282A/en
Priority to CN201380018522.8A priority patent/CN104205060B/zh
Priority to DE112013002014.9T priority patent/DE112013002014B4/de
Priority to PCT/IB2013/052388 priority patent/WO2013153472A1/en
Publication of US20130275966A1 publication Critical patent/US20130275966A1/en
Abandoned legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 - Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/202 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F 11/2035 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant without idle spare hardware
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/14 - Error detection or correction of the data by redundancy in operation
    • G06F 11/1402 - Saving, restoring, recovering or retrying
    • G06F 11/1415 - Saving, restoring, recovering or retrying at system level
    • G06F 11/1438 - Restarting or rejuvenating
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/14 - Error detection or correction of the data by redundancy in operation
    • G06F 11/1479 - Generic software techniques for error detection or fault masking
    • G06F 11/1482 - Generic software techniques for error detection or fault masking by means of middleware or OS functionality
    • G06F 11/1484 - Generic software techniques for error detection or fault masking by means of middleware or OS functionality involving virtual machines

Definitions

  • the invention disclosed and claimed herein generally pertains to a method and apparatus wherein a hypervisor is linked to one or more other hypervisors to form a high availability (HA) cluster. More particularly, the invention pertains to a method and apparatus of the above type wherein each hypervisor may enable multiple guest operating systems, or guest virtual machines (VMs), to run concurrently on a host computing platform.
  • Certain virtualization management products maintain the availability of guest VMs by including or embedding an HA cluster product in their product offerings.
  • these products work by forming the underlying hypervisors, each of which runs on a physical machine, into a high availability cluster. Heartbeating is then performed between the hypervisors. When a member of the cluster fails the heartbeat, whether due to hypervisor failure or physical server failure, the embedded HA clustering technology restarts the guest VMs on alternate servers, thus maintaining their availability.
  • This approach has several limitations. For example, it does not detect and recover from failures of the guest VM systems themselves, beyond a full crash of the guest's operating system. It only detects and recovers from the failure of the underlying hypervisor and its physical server. Neither does it detect and recover from the failure of applications running inside the guest VMs. Thus, an application can fail while running within a guest VM without the hypervisor based cluster taking any notice; in this case, the guest is still up, but it does not provide service. This places a significant limitation on the achievable availability of virtualized systems, since failures are frequently due to operating system problems and application crashes and hangs. Moreover, more complex critical business applications require operations at the application level to take advantage of certain built-in data replication technology. Without any visibility into the guest VM, it is not possible to invoke these operations and take advantage of those built-in features.
  • Embodiments of the invention may take the form of a method, a computer program product, or an apparatus.
  • An embodiment directed to a method is associated with a first node comprising a hypervisor and one or more guest virtual machines (VMs), wherein each guest VM is disposed to run one or more applications, and the first node is joined with one or more other nodes to form a high availability (HA) cluster.
  • the method includes the step of establishing an internal bidirectional communication channel between each guest VM and the hypervisor of the first node.
  • the method further includes sending messages that include commands and responses to commands through the internal channel, between the hypervisor and a given guest VM, wherein respective commands are sent to manage a specified application running on the given guest VM.
  • the messages are selectively monitored, to detect an occurrence of a failure condition associated with the specified application running on the given guest VM. Responsive to detecting a failure condition, action is taken to correct the failure condition, wherein the action includes sending at least one command through the internal channel from the hypervisor to the given guest VM.
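The monitor-and-recover cycle described above can be sketched as follows. This is an illustrative sketch only, not the claimed implementation; the message format, command names ("status", "restart"), and failure criterion are assumptions introduced for the example.

```python
# Sketch of the claimed monitoring loop: the hypervisor sends a status
# command over the internal channel to the guest VM's availability manager,
# and on a failure condition sends a corrective command back.

def check_and_recover(channel, app_name):
    """Send a status command; on a failure condition, send a restart command."""
    channel.send({"cmd": "status", "app": app_name})
    response = channel.receive()
    if response.get("state") != "running":            # failure condition detected
        channel.send({"cmd": "restart", "app": app_name})
        return "corrective-action-taken"
    return "healthy"

class FakeChannel:
    """Stand-in for the internal hypervisor/guest channel (assumption)."""
    def __init__(self, state):
        self.state, self.sent = state, []
    def send(self, msg):
        self.sent.append(msg)
    def receive(self):
        return {"state": self.state}

print(check_and_recover(FakeChannel("running"), "app212"))  # healthy
print(check_and_recover(FakeChannel("hung"), "app212"))     # corrective-action-taken
```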
  • FIGS. 1A and 1B are block diagrams that each depicts an HA cluster of nodes, in which an illustrative embodiment of the invention is implemented;
  • FIG. 2 is a schematic view illustrating a node for the node cluster of FIG. 1A or FIG. 1B ;
  • FIG. 3 is a flow chart showing steps of a method comprising an embodiment of the invention.
  • FIG. 4 is a block diagram showing a computer or data processing system that may be used as one or more of the components of an embodiment of the invention.
  • aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • a high availability (HA) computer cluster 100 which comprises multiple nodes exemplified by nodes 102 and 104 .
  • Nodes 102 and 104 are also referenced as node 1 and node N, respectively, where N is the total number of nodes.
  • N is two, but cluster 100 is not limited thereto.
  • Respective nodes are joined together to form the cluster by means of a bus 106 or the like.
  • Each of the nodes comprises a computer server system which is constructed or configured in accordance with an embodiment of the invention, as described hereinafter in connection with FIG. 2 . More particularly, each node includes a hypervisor and a hardware platform, for running applications and multiple guest operating systems, which are referred to herein as guest virtual machines (VMs).
  • a cable 108 is provided to interconnect each hypervisor of the nodes of cluster 100 , and to carry heartbeat pulses or messages therebetween. If an application is running on a given one of the nodes, and the other node detects an alteration or failure of heartbeats sent from the given node, the other node will recognize that a failure has occurred in the hypervisor or in the physical server of the given node. The other node may then be started to implement failover, to run the application from a guest VM managed by the hypervisor of the other node.
  • Cluster 100 is a peer to peer arrangement, since the cluster is not provided with a central manager to direct or control failover between different nodes. Instead, failover is implemented by the nodes themselves.
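The peer-to-peer heartbeat detection described above can be sketched as follows. The timeout value, clock source, and data structures are assumptions for illustration; the patent does not specify them.

```python
import time

# Sketch of peer-to-peer heartbeat monitoring between cluster nodes:
# each node records the last heartbeat seen from each peer and declares
# a peer failed once no heartbeat has arrived within the timeout.

HEARTBEAT_TIMEOUT = 3.0  # seconds without a heartbeat before declaring failure

class PeerMonitor:
    def __init__(self, timeout=HEARTBEAT_TIMEOUT):
        self.timeout = timeout
        self.last_seen = {}            # peer node -> timestamp of last heartbeat

    def record_heartbeat(self, node, now=None):
        self.last_seen[node] = now if now is not None else time.monotonic()

    def failed_peers(self, now=None):
        """Return peers whose heartbeats have stopped; candidates for failover."""
        now = now if now is not None else time.monotonic()
        return [n for n, t in self.last_seen.items() if now - t > self.timeout]

mon = PeerMonitor()
mon.record_heartbeat("node1", now=100.0)
mon.record_heartbeat("nodeN", now=103.5)
print(mon.failed_peers(now=104.0))  # ['node1'] -- node1 missed its heartbeats
```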
  • nodes 102 and 104 joined together to form a cluster, by means of bus 106 .
  • the cluster of FIG. 1B is provided with an HA cluster manager 110 .
  • manager 110 When an application is running on a given one of the nodes, heartbeats sent from the given node are monitored by manager 110 . Upon detecting a failure indicated by the heartbeats, manager 110 could implement failover to run the application on the other node.
  • Node 200 has a computing platform 202 , which comprises a hardware component 204 a and an operating system 204 b that includes a hypervisor 206 .
  • Hardware component 204 a and operating system 204 b operate to provide guest operating systems, or virtual machines (VMs) 208 and 210 , which are managed by hypervisor 206 .
  • the guest VMs 208 and 210 are able to run respective applications 212 and 214 , and are provided with application availability managers 216 and 218 for controlling and monitoring such applications.
  • FIG. 2 further shows a local HA cluster manager 220 , which manages hypervisor 206 , over a bidirectional communication path such as link 220 a .
  • HA cluster manager 220 is provided to implement failovers that involve node 200 . For example, if an application is running on a guest VM, and hypervisor 206 notifies cluster manager 220 of a detected failure, HA cluster manager 220 could direct hypervisor 206 to have the application run on a different guest VM of node 200 , or on a guest VM of a different node of the associated node cluster. By providing this capability, the node cluster is able to achieve high availability.
  • the respective operation of HA cluster manager 220 and hypervisor 206 , and the interaction therebetween to monitor and manage guest VMs 208 and 210 and applications running on the guest VMs, is described hereinafter in greater detail.
  • HA cluster manager 220 could be located adjacent to hypervisor 206 , or could be contained therein. In either arrangement, each of these components would function or operate as described above.
  • HA cluster manager 220 comprises a component of the Tivoli System Automation Multi-Platform (TSA-MP) of the International Business Machines Corporation.
  • FIG. 2 also shows HA cluster manager 220 connected to interact with local filesystem components 222 and 224 of computing platform 202 , through links 220 b and 220 c , respectively.
  • These filesystem components are used in data transfers with guest VMs 208 and 210 , respectively, as described hereinafter in further detail.
  • a VM channel, or internal channel 226 which extends between hypervisor 206 and an application availability manager 216 of guest VM 208 .
  • a similar internal channel 228 extends between hypervisor 206 and application availability manager 218 of guest VM 210 .
  • the internal channels 226 and 228 are each bidirectional, and are thus able to carry messages between hypervisor 206 and their respective guest VMs 208 and 210 .
  • FIG. 2 further shows channel 226 extended further to filesystem 222 , and channel 228 extended further to filesystem 224 .
  • the internal channels 226 and 228 may be implemented by using a Kernel-based Virtual Machine (KVM) hypervisor, although the invention is by no means limited thereto.
  • Internal channel 226 comprises a pre-specified path for streaming data in two directions, between hypervisor 206 and application availability manager 216 of guest VM 208 .
  • the internal channel 226 includes a memory buffer of pre-specified capacity at each of its ends, to stream data or receive streamed data over the channel.
  • Read messages and write messages could be sent by using an API that creates and uses ports for communication over the internal channel. Ports could be created at the hypervisor and also at application availability manager 216 .
  • Internal channel 228 is similar or identical to internal channel 226 , except that internal channel 228 extends between hypervisor 206 and application availability manager 218 .
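A minimal sketch of such a bidirectional channel with a bounded memory buffer at each end follows. The buffer capacity, message framing, and class names are assumptions; an actual KVM deployment would typically realize the channel as a virtio-serial port rather than in-process queues.

```python
from collections import deque

# Sketch of a bidirectional internal channel (like channels 226/228) with a
# memory buffer of pre-specified capacity at each end. A bounded deque drops
# the oldest message when the buffer is full.

class ChannelEnd:
    def __init__(self, capacity):
        self.inbox = deque(maxlen=capacity)    # pre-specified buffer capacity

class InternalChannel:
    def __init__(self, capacity=64):
        self.hypervisor_end = ChannelEnd(capacity)
        self.guest_end = ChannelEnd(capacity)

    def to_guest(self, msg):
        """Hypervisor -> guest direction."""
        self.guest_end.inbox.append(msg)

    def to_hypervisor(self, msg):
        """Guest -> hypervisor direction."""
        self.hypervisor_end.inbox.append(msg)

chan = InternalChannel(capacity=4)
chan.to_guest("status app212")
chan.to_hypervisor("running")
print(chan.guest_end.inbox[0], chan.hypervisor_end.inbox[0])
```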
  • HA cluster manager 220 acting through hypervisor 206 is given enhanced capability to manage and control applications running on guest VMs 208 and 210 .
  • hypervisor 206 is able to send commands through internal channel 226 to the kernel of guest VM 208 , and more particularly to application availability manager 216 thereof. These commands include start, stop and status inquiry commands that pertain to an application 212 . In response to these commands, response codes are sent from manager 216 of guest VM 208 back to hypervisor 206 .
  • hypervisor 206 is able to directly control operation of application 212 , when such application is running on guest VM 208 .
  • hypervisor 206 can send messages through internal channel 226 to availability manager 216 , which request the status or availability of application 212 running on guest VM 208 .
  • Status information provided by availability manager 216 through the internal channel 226 could include error messages logged by the manager 216 , performance information for the application 212 and warning messages such as limited memory capacity. Status information could further include notification that a threshold has been crossed, which pertains to a pre-specified rule associated with application 212 .
  • failures can be detected which have occurred in the running application 212 or in guest VM 208 . Upon detecting one of such failures, corrective action can be taken.
  • applications within VM 208 can be managed and monitored.
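The guest-side command dispatch described above (start, stop, and status inquiry commands answered with response codes) can be sketched as follows. The command vocabulary, response-code values, and state names are assumptions made for the example.

```python
# Sketch of the application availability manager inside a guest VM,
# dispatching start/stop/status commands received over the internal
# channel and returning response codes to the hypervisor.

class AvailabilityManager:
    def __init__(self):
        self.apps = {}                       # application name -> state

    def handle(self, command, app):
        if command == "start":
            self.apps[app] = "running"
            return {"rc": 0}
        if command == "stop":
            self.apps[app] = "stopped"
            return {"rc": 0}
        if command == "status":
            return {"rc": 0, "state": self.apps.get(app, "unknown")}
        return {"rc": 1, "error": "unknown command"}

mgr = AvailabilityManager()
mgr.handle("start", "app212")
print(mgr.handle("status", "app212"))  # {'rc': 0, 'state': 'running'}
```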
  • the HA cluster manager 220 , through the link 220 a or the like, is able to obtain state information from the hypervisor 206 in regard to application availability and hardware devices in both guest VMs 208 and 210 , including availability managers 216 and 218 . In response to accumulating such state information, cluster manager 220 is able to make decisions based on pre-specified rules, for executing commands to hypervisor 206 . Such commands could be executed by running scripts or the like. Accordingly, if a failure were detected in running application 212 , as described above, HA cluster manager 220 could direct hypervisor 206 to start, stop and then restart application 212 on the same guest VM 208 . This action could be readily carried out by sending an appropriate sequence of commands from hypervisor 206 to VM 208 , through internal channel 226 .
  • application 212 could be stopped, and then restarted to run on a different guest VM of node 200 , e.g., guest VM 210 .
  • application 212 could be stopped, and then restarted to run on a guest VM that was located on a node of node cluster 100 other than node 200 .
  • the guest VM could be either guest VM 208 , or a different guest VM.
  • actions of the above types are referred to as “failover” and “implementing failover”.
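The escalation policy in the bullets above (restart in place first, then fail over to another guest VM or node) can be sketched as follows. The policy ordering and helper callables are assumptions; the patent leaves the rule set to the cluster manager's pre-specified configuration.

```python
# Sketch of the recovery escalation: try to restart the application in
# place; if that fails, try each failover target (another guest VM on the
# same node, or a guest VM on another node) in order.

def recover(app, restart_in_place, failover_targets, start_on):
    """Try restart-in-place, then each failover target in order."""
    if restart_in_place(app):
        return ("restarted", None)
    for target in failover_targets:
        if start_on(app, target):
            return ("failed-over", target)
    return ("unrecovered", None)

# Restart always fails in this run, so the first failover target is used.
result = recover(
    "app212",
    restart_in_place=lambda app: False,
    failover_targets=["guestVM210", "nodeN/guestVM"],
    start_on=lambda app, target: True,
)
print(result)  # ('failed-over', 'guestVM210')
```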
  • Component 234 is responsive to commands sent through internal channel 226 , to cause data to be selectively exchanged between guest VM 208 and filesystem 222 of computing platform 202 .
  • a similar I/O Device Requests component 236 is associated with I/O emulator 232 and is connected to internal channel 228 .
  • a component such as component 234 functions as a replication sender. That is, it causes incoming data and other data associated with the running application 212 to be replicated and stored, such as by filesystem 222 or the like.
  • a component such as component 236 of guest VM 210 functions as a replication receiver.
  • the replication receiver is adapted to receive or keep track of data that has been replicated by the replication sender. Then, if a failure as described above occurs, application 212 may be stopped on guest VM 208 and started on guest VM 210 .
  • a command is sent to component 236 through internal channel 228 , from manager 220 , wherein the command directs component 236 to perform the function of replication sender rather than replication receiver.
  • Component 234 is similarly instructed to perform the function of replication receiver.
  • Data replicated by replicating sender 236 may then be routed for storage.
  • the replicated data is kept in memory. If the SAP enqueue service (ENQ) dies and is restarted on the ENQREP's guest, it retrieves its data by memory to memory transfer, which is faster than any storage access.
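The sender/receiver role swap described for components 234 and 236 can be sketched as follows. The in-memory dictionaries mirror the SAP enqueue/replica example; the class and method names are assumptions, not the patented components.

```python
# Sketch of swapping replication roles after a failover: the former
# replication receiver (component 236) becomes the sender, and vice versa,
# with replicated data kept in memory for fast memory-to-memory transfer.

class ReplicationComponent:
    def __init__(self, role):
        self.role = role                   # "sender" or "receiver"
        self.data = {}                     # replicated data kept in memory

    def replicate(self, peer, key, value):
        assert self.role == "sender", "only the sender replicates"
        self.data[key] = value
        peer.data[key] = value             # memory-to-memory transfer

def swap_roles(a, b):
    a.role, b.role = b.role, a.role

comp234 = ReplicationComponent("sender")     # on guest VM 208
comp236 = ReplicationComponent("receiver")   # on guest VM 210
comp234.replicate(comp236, "lock-table", "state-1")

swap_roles(comp234, comp236)                 # after failover, 236 is the sender
comp236.replicate(comp234, "lock-table", "state-2")
print(comp234.data["lock-table"], comp234.role)  # state-2 receiver
```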
  • HA cluster manager 220 may be a component of a high availability cluster management product that is potentially quite complex.
  • such product may include complex management scripts and resource configurations.
  • scripts and configurations are all contained in the hypervisor 206 , as illustrated by component 238 .
  • an internal bidirectional channel is established between a hypervisor of a computing platform and each of multiple guest VMs, wherein the hypervisor and guest VMs are included in a node of a node cluster.
  • messages are sent through the internal channel between the hypervisor and one of the guest VMs to manage an application running on the guest VM.
  • Step 306 discloses monitoring messages sent through the internal channel, in order to detect a failure of the application running on the guest VM.
  • corrective action comprises sending commands through the internal channel from the hypervisor to the guest VM, wherein the commands stop, and then restart the application running on the guest VM.
  • decision step 310 if the corrective action of step 308 is successful, so that the detected failure is overcome, the method of FIG. 3 ends. Otherwise, the method proceeds to step 312 .
  • one of several further actions is selected. Each of these initially includes stopping the application running on the guest VM, which is usefully carried out by sending a stop command from the hypervisor to the guest VM through the internal channel.
  • the further actions then respectively comprise running the application on a different guest VM of the same node; running the application and the guest VM on a different node of the cluster; and running the application on a different guest VM of a different node. After taking one of these actions, the method of FIG. 3 ends.
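The overall flow of FIG. 3 can be sketched end to end as follows: detect a failure over the internal channel (steps 304-306), attempt the in-place stop/restart correction (steps 308-310), and if that fails select one of the further failover actions (step 312). The helper callables are assumptions standing in for the channel operations described above.

```python
# Sketch of the FIG. 3 method flow: monitor, correct in place, and if the
# correction does not overcome the failure, escalate to a failover action.

def run_method(detect_failure, correct_in_place, failover_actions):
    if not detect_failure():                   # steps 304-306: monitor messages
        return "no-failure"
    if correct_in_place():                     # steps 308-310: stop/restart in place
        return "corrected"
    return failover_actions[0]()               # step 312: take a further action

outcome = run_method(
    detect_failure=lambda: True,
    correct_in_place=lambda: False,
    failover_actions=[lambda: "moved-to-other-node"],
)
print(outcome)  # moved-to-other-node
```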
  • FIG. 4 is a block diagram that shows a data processing system in accordance with an illustrative embodiment.
  • Data processing system 400 is an example of a computer, which may be used to implement one or more components of embodiments of the invention, and in which computer usable program code or instructions implementing the related processes may be located for the illustrative embodiments.
  • data processing system 400 includes communications fabric 402 , which provides communications between processor unit 404 , memory 406 , persistent storage 408 , communications unit 410 , input/output (I/O) unit 412 , and display 414 .
  • Processor unit 404 serves to execute instructions for software that may be loaded into memory 406 .
  • Processor unit 404 may be a set of one or more processors or may be a multi-processor core, depending on the particular implementation. Further, processor unit 404 may be implemented using one or more heterogeneous processor systems, in which a main processor is present with secondary processors on a single chip. As another illustrative example, processor unit 404 may be a symmetric multi-processor system containing multiple processors of the same type.
  • Memory 406 and persistent storage 408 are examples of storage devices 416 .
  • a storage device is any piece of hardware that is capable of storing information, such as, for example, without limitation, data, program code in functional form, and/or other suitable information either on a temporary basis and/or a permanent basis.
  • Memory 406 in these examples, may be, for example, a random access memory, or any other suitable volatile or non-volatile storage device.
  • Persistent storage 408 may take various forms, depending on the particular implementation.
  • persistent storage 408 may contain one or more components or devices.
  • persistent storage 408 may be a hard drive, a flash memory, a rewritable optical disk, a rewritable magnetic tape, or some combination of the above.
  • the media used by persistent storage 408 may be removable.
  • a removable hard drive may be used for persistent storage 408 .
  • Communications unit 410 in these examples, provides for communication with other data processing systems or devices.
  • communications unit 410 is a network interface card.
  • Communications unit 410 may provide communications through the use of either or both physical and wireless communications links.
  • Input/output unit 412 allows for the input and output of data with other devices that may be connected to data processing system 400 .
  • input/output unit 412 may provide a connection for user input through a keyboard, a mouse, and/or some other suitable input device. Further, input/output unit 412 may send output to a printer.
  • Display 414 provides a mechanism to display information to a user.
  • Instructions for the operating system, applications, and/or programs may be located in storage devices 416 , which are in communication with processor unit 404 through communications fabric 402 .
  • the instructions are in a functional form on persistent storage 408 . These instructions may be loaded into memory 406 for execution by processor unit 404 .
  • the processes of the different embodiments may be performed by processor unit 404 using computer implemented instructions, which may be located in a memory, such as memory 406 .
  • In the different embodiments, program code may be embodied on different physical or computer readable storage media, such as memory 406 or persistent storage 408 .
  • Program code 418 is located in a functional form on computer readable media 420 that is selectively removable and may be loaded onto or transferred to data processing system 400 for execution by processor unit 404 .
  • Program code 418 and computer readable media 420 form computer program product 422 .
  • computer readable media 420 may be computer readable storage media 424 or computer readable signal media 426 .
  • Computer readable storage media 424 may include, for example, an optical or magnetic disc that is inserted or placed into a drive or other device that is part of persistent storage 408 for transfer onto a storage device, such as a hard drive, that is part of persistent storage 408 .
  • Computer readable storage media 424 also may take the form of a persistent storage, such as a hard drive, a thumb drive, or a flash memory that is connected to data processing system 400 . In some instances, computer readable storage media 424 may not be removable from data processing system 400 .
  • program code 418 may be transferred to data processing system 400 using computer readable signal media 426 .
  • Computer readable signal media 426 may be, for example, a propagated data signal containing program code 418 .
  • Computer readable signal media 426 may be an electromagnetic signal, an optical signal, and/or any other suitable type of signal. These signals may be transmitted over communications links, such as wireless communication links, an optical fiber cable, a coaxial cable, a wire, and/or any other suitable type of communications link.
  • the communications link and/or the connection may be physical or wireless in the illustrative examples.
  • the computer readable media also may take the form of non-tangible media, such as communications links or wireless transmissions containing the program code.
  • program code 418 may be downloaded over a network to persistent storage 408 from another device or data processing system through computer readable signal media 426 for use within data processing system 400 .
  • program code stored in a computer readable storage media in a server data processing system may be downloaded over a network from the server to data processing system 400 .
  • the data processing system providing program code 418 may be a server computer, a client computer, or some other device capable of storing and transmitting program code 418 .
  • data processing system 400 may include organic components integrated with inorganic components and/or may be comprised entirely of organic components excluding a human being.
  • a storage device may be comprised of an organic semiconductor.
  • a storage device in data processing system 400 is any hardware apparatus that may store data.
  • Memory 406 , persistent storage 408 , and computer readable media 420 are examples of storage devices in a tangible form.
  • a bus system may be used to implement communications fabric 402 and may be comprised of one or more buses, such as a system bus or an input/output bus.
  • the bus system may be implemented using any suitable type of architecture that provides for a transfer of data between different components or devices attached to the bus system.
  • a communications unit may include one or more devices used to transmit and receive data, such as a modem or a network adapter.
  • a memory may be, for example, memory 406 or a cache such as found in an interface and memory controller hub that may be present in communications fabric 402 .
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Software Systems (AREA)
  • Hardware Redundancy (AREA)
  • Mathematical Physics (AREA)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US13/444,997 US20130275966A1 (en) 2012-04-12 2012-04-12 Providing application based monitoring and recovery for a hypervisor of an ha cluster
US13/589,390 US9110867B2 (en) 2012-04-12 2012-08-20 Providing application based monitoring and recovery for a hypervisor of an HA cluster
GB1414770.6A GB2513282A (en) 2012-04-12 2013-03-26 Providing application based monitoring and recovery for a hypervisor of an HA cluster
CN201380018522.8A CN104205060B (zh) 2013-03-26 Providing application based monitoring and recovery for a hypervisor of an HA cluster
DE112013002014.9T DE112013002014B4 (de) 2013-03-26 Providing application-based monitoring and recovery for a hypervisor of an HA cluster
PCT/IB2013/052388 WO2013153472A1 (en) 2012-04-12 2013-03-26 Providing application based monitoring and recovery for a hypervisor of an ha cluster

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/444,997 US20130275966A1 (en) 2012-04-12 2012-04-12 Providing application based monitoring and recovery for a hypervisor of an ha cluster

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/589,390 Continuation US9110867B2 (en) 2012-04-12 2012-08-20 Providing application based monitoring and recovery for a hypervisor of an HA cluster

Publications (1)

Publication Number Publication Date
US20130275966A1 true US20130275966A1 (en) 2013-10-17

Family

ID=49326183

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/444,997 Abandoned US20130275966A1 (en) 2012-04-12 2012-04-12 Providing application based monitoring and recovery for a hypervisor of an ha cluster
US13/589,390 Active 2032-09-11 US9110867B2 (en) 2012-04-12 2012-08-20 Providing application based monitoring and recovery for a hypervisor of an HA cluster

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/589,390 Active 2032-09-11 US9110867B2 (en) 2012-04-12 2012-08-20 Providing application based monitoring and recovery for a hypervisor of an HA cluster

Country Status (5)

Country Link
US (2) US20130275966A1 (en)
CN (1) CN104205060B (zh)
DE (1) DE112013002014B4 (de)
GB (1) GB2513282A (en)
WO (1) WO2013153472A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130275805A1 (en) * 2012-04-12 2013-10-17 International Business Machines Corporation Providing application based monitoring and recovery for a hypervisor of an ha cluster
US20150019704A1 (en) * 2013-06-26 2015-01-15 Amazon Technologies, Inc. Management of computing sessions
US9515954B2 (en) 2013-03-11 2016-12-06 Amazon Technologies, Inc. Automated desktop placement
US9552366B2 (en) 2013-03-11 2017-01-24 Amazon Technologies, Inc. Automated data synchronization
US20170070406A1 (en) * 2015-09-09 2017-03-09 International Business Machines Corporation Virtual desktop operation and data continuity preservation
US20170192801A1 (en) * 2015-12-31 2017-07-06 International Business Machines Corporation Security application for a guest operating system in a virtual computing environment
US10142406B2 (en) 2013-03-11 2018-11-27 Amazon Technologies, Inc. Automated data center selection
US10313345B2 (en) 2013-03-11 2019-06-04 Amazon Technologies, Inc. Application marketplace for virtual desktops
US10623243B2 (en) 2013-06-26 2020-04-14 Amazon Technologies, Inc. Management of computing sessions
US10686646B1 (en) 2013-06-26 2020-06-16 Amazon Technologies, Inc. Management of computing sessions
US11720393B2 (en) 2016-03-18 2023-08-08 Airwatch Llc Enforcing compliance rules using guest management components

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9208015B2 (en) * 2013-06-18 2015-12-08 Vmware, Inc. Hypervisor remedial action for a virtual machine in response to an error message from the virtual machine
US20150100826A1 (en) * 2013-10-03 2015-04-09 Microsoft Corporation Fault domains on modern hardware
CN103559124B (zh) * 2013-10-24 2017-04-12 Huawei Technologies Co., Ltd. Fast fault detection method and device
US9213572B2 (en) 2013-12-02 2015-12-15 Vmware, Inc. Interdependent virtual machine management
US9952946B2 (en) 2014-02-04 2018-04-24 Telefonaktiebolaget L M Ericsson (Publ) Managing service availability in a mega virtual machine
CN104036548A (zh) * 2014-07-01 2014-09-10 Inspur (Beijing) Electronic Information Industry Co., Ltd. MHA cluster environment rebuilding method, device and system
US10095590B2 (en) * 2015-05-06 2018-10-09 Stratus Technologies, Inc Controlling the operating state of a fault-tolerant computer system
DE102015214376A1 (de) * 2015-07-29 2017-02-02 Robert Bosch Gmbh Method and device for on-board diagnosis in a control unit having a hypervisor and at least one guest system operated under the hypervisor
CN106559441B (zh) * 2015-09-25 2020-09-04 Huawei Technologies Co., Ltd. Cloud computing service based virtual machine monitoring method, device and system
CN108139925B (zh) 2016-05-31 2022-06-03 Avago Technologies Ltd. High availability for virtual machines
US10127068B2 (en) 2016-06-30 2018-11-13 Amazon Technologies, Inc. Performance variability reduction using an opportunistic hypervisor
US10318311B2 (en) * 2016-06-30 2019-06-11 Amazon Technologies, Inc. Memory allocation techniques at partially-offloaded virtualization managers
CN111309515B (zh) * 2018-12-11 2023-11-28 Huawei Technologies Co., Ltd. Disaster recovery control method, device and system
CN113360395A (zh) * 2021-06-24 2021-09-07 The 14th Research Institute of China Electronics Technology Group Corporation Real-time interactive management technique for a simulation system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080189468A1 (en) * 2007-02-02 2008-08-07 Vmware, Inc. High Availability Virtual Machine Cluster
US20080307259A1 (en) * 2007-06-06 2008-12-11 Dell Products L.P. System and method of recovering from failures in a virtual machine
US20090070761A1 (en) * 2007-09-06 2009-03-12 O2Micro Inc. System and method for data communication with data link backup
US20100077250A1 (en) * 2006-12-04 2010-03-25 Electronics and Telecommunications Research Institute Virtualization based high availability cluster system and method for managing failure in virtualization based high availability cluster system
US20110191627A1 (en) * 2010-01-29 2011-08-04 Maarten Koning System And Method for Handling a Failover Event
US8117495B2 (en) * 2007-11-26 2012-02-14 Stratus Technologies Bermuda Ltd Systems and methods of high availability cluster environment failover protection
US8171349B2 (en) * 2010-06-18 2012-05-01 Hewlett-Packard Development Company, L.P. Associating a monitoring manager with an executable service in a virtual machine migrated between physical machines
US20130007506A1 (en) * 2011-07-01 2013-01-03 Microsoft Corporation Managing recovery virtual machines in clustered environment
US20130275805A1 (en) * 2012-04-12 2013-10-17 International Business Machines Corporation Providing application based monitoring and recovery for a hypervisor of an ha cluster

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3958714B2 (ja) * 2003-06-16 2007-08-15 SoftBank Mobile Corp. Mobile communication device
US20050132379A1 (en) 2003-12-11 2005-06-16 Dell Products L.P. Method, system and software for allocating information handling system resources in response to high availability cluster fail-over events
JP2005202652A (ja) 2004-01-15 2005-07-28 Canon Inc Application control apparatus, control method therefor, and storage medium
US7444538B2 (en) 2004-09-21 2008-10-28 International Business Machines Corporation Fail-over cluster with load-balancing capability
JP4733399B2 (ja) * 2005-01-28 2011-07-27 Hitachi, Ltd. Computer system, computer, storage device, and management terminal
JP4809209B2 (ja) 2006-12-28 2011-11-09 Hitachi, Ltd. System switchover method and computer system in a server virtualization environment
US7757116B2 (en) 2007-04-04 2010-07-13 Vision Solutions, Inc. Method and system for coordinated multiple cluster failover
JP5032191B2 (ja) 2007-04-20 2012-09-26 Hitachi, Ltd. Cluster system configuration method and cluster system in a server virtualization environment
US7886183B2 (en) 2008-08-07 2011-02-08 Symantec Operating Corporation Providing fault tolerant storage system to a cluster
US8549364B2 (en) 2009-02-18 2013-10-01 Vmware, Inc. Failure detection and recovery of host computers in a cluster
US8055933B2 (en) 2009-07-21 2011-11-08 International Business Machines Corporation Dynamic updating of failover policies for increased application availability

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100077250A1 (en) * 2006-12-04 2010-03-25 Electronics and Telecommunications Research Institute Virtualization based high availability cluster system and method for managing failure in virtualization based high availability cluster system
US20080189468A1 (en) * 2007-02-02 2008-08-07 Vmware, Inc. High Availability Virtual Machine Cluster
US20080307259A1 (en) * 2007-06-06 2008-12-11 Dell Products L.P. System and method of recovering from failures in a virtual machine
US20090070761A1 (en) * 2007-09-06 2009-03-12 O2Micro Inc. System and method for data communication with data link backup
US8117495B2 (en) * 2007-11-26 2012-02-14 Stratus Technologies Bermuda Ltd Systems and methods of high availability cluster environment failover protection
US20110191627A1 (en) * 2010-01-29 2011-08-04 Maarten Koning System And Method for Handling a Failover Event
US8171349B2 (en) * 2010-06-18 2012-05-01 Hewlett-Packard Development Company, L.P. Associating a monitoring manager with an executable service in a virtual machine migrated between physical machines
US20130007506A1 (en) * 2011-07-01 2013-01-03 Microsoft Corporation Managing recovery virtual machines in clustered environment
US20130275805A1 (en) * 2012-04-12 2013-10-17 International Business Machines Corporation Providing application based monitoring and recovery for a hypervisor of an ha cluster

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9110867B2 (en) * 2012-04-12 2015-08-18 International Business Machines Corporation Providing application based monitoring and recovery for a hypervisor of an HA cluster
US20130275805A1 (en) * 2012-04-12 2013-10-17 International Business Machines Corporation Providing application based monitoring and recovery for a hypervisor of an ha cluster
US10616129B2 (en) 2013-03-11 2020-04-07 Amazon Technologies, Inc. Automated desktop placement
US10142406B2 (en) 2013-03-11 2018-11-27 Amazon Technologies, Inc. Automated data center selection
US9552366B2 (en) 2013-03-11 2017-01-24 Amazon Technologies, Inc. Automated data synchronization
US9515954B2 (en) 2013-03-11 2016-12-06 Amazon Technologies, Inc. Automated desktop placement
US10313345B2 (en) 2013-03-11 2019-06-04 Amazon Technologies, Inc. Application marketplace for virtual desktops
US20150019704A1 (en) * 2013-06-26 2015-01-15 Amazon Technologies, Inc. Management of computing sessions
US10623243B2 (en) 2013-06-26 2020-04-14 Amazon Technologies, Inc. Management of computing sessions
US10686646B1 (en) 2013-06-26 2020-06-16 Amazon Technologies, Inc. Management of computing sessions
US10084674B2 (en) * 2015-09-09 2018-09-25 International Business Machines Corporation Virtual desktop operation and data continuity preservation
US20180359166A1 (en) * 2015-09-09 2018-12-13 International Business Machines Corporation Virtual desktop operation and data continuity preservation
US20170070406A1 (en) * 2015-09-09 2017-03-09 International Business Machines Corporation Virtual desktop operation and data continuity preservation
US10785133B2 (en) * 2015-09-09 2020-09-22 International Business Machines Corporation Virtual desktop operation and data continuity preservation
US10089124B2 (en) * 2015-12-31 2018-10-02 International Business Machines Corporation Security application for a guest operating system in a virtual computing environment
US20170192801A1 (en) * 2015-12-31 2017-07-06 International Business Machines Corporation Security application for a guest operating system in a virtual computing environment
US10691475B2 (en) 2015-12-31 2020-06-23 International Business Machines Corporation Security application for a guest operating system in a virtual computing environment
US11720393B2 (en) 2016-03-18 2023-08-08 Airwatch Llc Enforcing compliance rules using guest management components

Also Published As

Publication number Publication date
DE112013002014T5 (de) 2015-01-08
CN104205060A (zh) 2014-12-10
GB2513282A (en) 2014-10-22
CN104205060B (zh) 2016-01-20
WO2013153472A1 (en) 2013-10-17
US20130275805A1 (en) 2013-10-17
DE112013002014B4 (de) 2019-08-14
GB201414770D0 (en) 2014-10-01
US9110867B2 (en) 2015-08-18

Similar Documents

Publication Publication Date Title
US9110867B2 (en) Providing application based monitoring and recovery for a hypervisor of an HA cluster
US10210061B2 (en) Fault tolerant application storage volumes for ensuring application availability and preventing data loss using forking techniques
JP6285511B2 (ja) Virtual machine cluster monitoring method and system
JP5851503B2 (ja) Providing high availability for applications in a high-availability virtual machine environment
US20190310881A1 (en) Managed orchestration of virtual machine instance migration
Bala et al. Fault tolerance-challenges, techniques and implementation in cloud computing
US20180081770A1 (en) Preventing split-brain scenario in a high-availability cluster
US9058265B2 (en) Automated fault and recovery system
US11880287B2 (en) Management of microservices failover
US20140204734A1 (en) Node device, communication system, and method for switching virtual switch
US9948509B1 (en) Method and apparatus for optimizing resource utilization within a cluster and facilitating high availability for an application
US10339012B2 (en) Fault tolerant application storage volumes for ensuring application availability and preventing data loss using suspend-resume techniques
US9436539B2 (en) Synchronized debug information generation
US10102088B2 (en) Cluster system, server device, cluster system management method, and computer-readable recording medium
US9569316B2 (en) Managing VIOS failover in a single storage adapter environment
WO2013190694A1 (ja) Computer recovery method, computer system, and storage medium
US9148479B1 (en) Systems and methods for efficiently determining the health of nodes within computer clusters
US10691353B1 (en) Checking of data difference for writes performed via a bus interface to a dual-server storage controller
US10817400B2 (en) Management apparatus and management method
US8843665B2 (en) Operating system state communication
JP5698280B2 (ja) Virtualization device, communication method, and program
Vugt et al. High Availability Clustering and Its Architecture

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HARPER, RICHARD E.;MITTELSTAEDT, MARCEL;MUELLER, MARKUS;AND OTHERS;REEL/FRAME:028033/0366

Effective date: 20120410

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION