US20080307259A1 - System and method of recovering from failures in a virtual machine - Google Patents
- Publication number
- US20080307259A1 (Application No. US 11/759,099)
- Authority
- US
- United States
- Prior art keywords
- virtual machine
- program
- restarting
- running
- management module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1415—Saving, restoring, recovering or retrying at system level
- G06F11/1438—Restarting or rejuvenating
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
- G06F11/0706—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
- G06F11/0712—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a virtual computing platform, e.g. logically partitioned systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
- G06F11/0793—Remedial or corrective actions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2046—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant where the redundant components share persistent storage
Definitions
- the present disclosure relates in general to clustered network environments, and more particularly to a system and method of recovering from failures in a virtual machine.
- An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information.
- Information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated.
- The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications.
- Information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
- Information handling systems, including servers, workstations, and other computers, are often grouped into computer networks, including networks having a client-server architecture in which servers may access storage, including shared storage, in response to requests from client computers of the network.
- The servers, also known as physical hosts, may include one or more virtual machines running on the host operating system and the host software of the physical host. Each virtual machine may comprise a virtual or “guest” OS.
- A single physical host may include multiple virtual machines, in which each virtual machine appears as a logical machine on a computer network. The presence of one or more virtual machines on a single physical host provides a separation of the hardware and software of a networked computer system. In certain instances, each virtual machine could be dedicated to the task of handling a single function.
- For example, one virtual machine could be a mail server, while another virtual machine present on the same physical host could be a file server.
- Any number of programs, e.g., operating systems and/or applications, may run on each virtual machine.
- A system may include a management module operable to determine the occurrence of a program failure in a virtual machine, and further operable to restart the program in response to the failure.
- A method for recovering from failures in a virtual machine may include, in a first physical host having a host operating system and a virtual machine running on the host operating system, monitoring one or more parameters associated with a program running on the virtual machine, each parameter having a predetermined acceptable range. The method may further include determining if the one or more parameters are within their respective predetermined acceptable ranges. In response to determining that the one or more parameters associated with the program running on the virtual machine are not within their respective predetermined acceptable ranges, a management module may cause the program running on the virtual machine to be restarted.
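The claimed method can be summarized as a monitor-compare-restart loop. The sketch below is an illustration of that logic only, not the patent's implementation; all names (`check_parameters`, `monitor_once`, the parameter names) are assumptions chosen for clarity.

```python
# Illustrative sketch of the claimed method: monitor parameters of a program
# running in a virtual machine, compare each against its predetermined
# acceptable range, and restart the program when any parameter is out of range.

def check_parameters(parameters, acceptable_ranges):
    """Return the names of parameters outside their predetermined ranges."""
    out_of_range = []
    for name, value in parameters.items():
        low, high = acceptable_ranges[name]
        if not (low <= value <= high):
            out_of_range.append(name)
    return out_of_range

def monitor_once(parameters, acceptable_ranges, restart_program):
    """One pass of the monitoring loop: restart the program on any violation."""
    violations = check_parameters(parameters, acceptable_ranges)
    if violations:
        restart_program(violations)  # the management module's corrective action
        return True
    return False
```

For instance, with an assumed range of 0–85% processor utilization, a reading of 95% would trigger the restart callback, while a reading of 50% would not.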
- A system for recovering from failures in a virtual machine may include a first physical host.
- The first physical host may include a host operating system, a management module in communication with the host operating system, and a virtual machine running on the host operating system and in communication with the management module.
- The virtual machine may be operable to run a program and run an agent.
- The agent may be operable to communicate to the management module one or more parameters associated with the program, each parameter having a predetermined acceptable range.
- The management module may be operable to determine if the one or more parameters associated with the program running on the virtual machine are within their respective predetermined acceptable ranges, and in response to determining that the one or more parameters are not within their respective predetermined acceptable ranges, cause the program running on the virtual machine to be restarted.
- An information handling system may include a processor, a memory communicatively coupled to the processor, a management module communicatively coupled to the memory and the processor, and a host operating system running on the information handling system and having a virtual machine running thereon.
- The virtual machine may be in communication with the management module and may be operable to run a program and run an agent.
- The agent may be operable to communicate to the management module one or more parameters associated with the program, each parameter having a predetermined acceptable range.
- The management module may be operable to determine if the one or more parameters associated with the program running on the virtual machine are within their respective predetermined acceptable ranges, and in response to determining that the one or more parameters are not within their respective predetermined acceptable ranges, cause the program running on the virtual machine to be restarted.
- FIG. 1 illustrates a block diagram of an example system for recovering from failures in a virtual machine, in accordance with teachings of the present disclosure.
- FIG. 2 illustrates a flow chart of a method for recovering from failures in a virtual machine, in accordance with teachings of the present disclosure.
- FIG. 3 illustrates the block diagram of the system of FIG. 1 , demonstrating the restarting of a program by re-instantiating a virtual machine on a physical host, in accordance with the present disclosure
- FIG. 4 illustrates the block diagram of the system of FIG. 1 , demonstrating the restarting of a program by re-instantiating a virtual machine on a second physical host, in accordance with the present disclosure.
- Preferred embodiments and their advantages are best understood by reference to FIGS. 1 through 4 , wherein like numbers are used to indicate like and corresponding parts.
- An information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes.
- An information handling system may be a personal computer, a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price.
- The information handling system may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU) or hardware or software control logic, ROM, and/or other types of nonvolatile memory.
- Additional components of the information handling system may include one or more disk drives, one or more network ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, a mouse, and a video display.
- The information handling system may also include one or more buses operable to transmit communications between the various hardware components.
- FIG. 1 illustrates a block diagram of an example system 100 for recovering from failures in a virtual machine, in accordance with teachings of the present disclosure.
- System 100 may include physical hosts 102 a and 102 b (which may be referred to generally as hosts 102 ), network 124 , and network storage 126 .
- Host devices 102 may include one or more information handling systems, as defined herein, and may be communicatively coupled to network 124 .
- Host devices 102 may be any type of processing device and may provide any type of functionality associated with an information handling system, including without limitation database management, transaction processing, storage, printing, or web server functionality.
- Network 124 may be a local area network (LAN), a metropolitan area network (MAN), storage area network (SAN), a wide area network (WAN), a wireless local area network (WLAN), a virtual private network (VPN), an intranet, the Internet or any other appropriate architecture or system that facilitates the communication of signals, data and/or messages (generally referred to as media).
- Network 124 may transmit media using the Fibre Channel (FC) standard, Frame Relay, Asynchronous Transfer Mode (ATM), Internet protocol (IP), other packet-based protocol, and/or any other transmission protocol and/or standard for transmitting media over a network.
- Network storage 126 may be communicatively coupled to network 124 .
- Network storage 126 may include any system, device, or apparatus operable to store media transmitted over network 124 .
- Network storage 126 may include, for example, network attached storage, one or more direct access storage devices (e.g. hard disk drives), and/or one or more sequential access storage devices (e.g. tape drives).
- Network storage 126 may be SCSI, iSCSI, SAS and/or Fibre Channel based storage.
- Physical hosts 102 may include a processor 104 , a memory 106 , local storage 108 , a management module 110 , and a host operating system 111 .
- Physical hosts 102 may host one or more virtual machines 112 , 118 running on host operating system 111 .
- Processor 104 may be any suitable system, device or apparatus operable to interpret program instructions and process data in an information handling system.
- Processor 104 may include, without limitation, a central processing unit, a microprocessor, a microcontroller, a digital signal processor and/or application-specific integrated circuits (ASICs).
- Processors may be suitable for any number of applications, including use in personal computers, computer peripherals, handheld computing devices, or in embedded systems incorporated into electronic or electromechanical devices such as cameras, mobile phones, audio-visual equipment, medical devices, automobiles and home appliances.
- Memory 106 may be communicatively coupled to processor 104 .
- Memory 106 may be any system, device or apparatus operable to store and maintain media.
- For example, memory 106 may include data and/or instructions used by processor 104 .
- Memory 106 may include random access memory (RAM), electronically erasable programmable read-only memory (EEPROM), a PCMCIA card, flash memory, and/or any suitable selection and/or array of volatile or non-volatile memory.
- Local storage 108 may be communicatively coupled to processor 104 .
- Local storage 108 may include any system, device, or apparatus operable to store media processed by processor 104 .
- Local storage 108 may include, for example, network attached storage, one or more direct access storage devices (e.g. hard disk drives), and/or one or more sequential access storage devices (e.g. tape drives).
- Management module 110 may be coupled to processor 104 , and may be any system, device or apparatus operable to monitor and/or receive information from virtual machines 112 , 118 and/or programs 116 , 122 running on virtual machines 112 , 118 , as discussed in greater detail below. Management module 110 may also be operable to manage virtual machines 112 and 118 instantiated on physical hosts 102 , including without limitation terminating and/or creating instantiations of virtual machines 112 and 118 , as discussed in greater detail below. Management module 110 may be implemented using hardware, software, or any combination thereof. In some embodiments, management module 110 may run on host operating system 111 . In other embodiments, management module 110 may run independently of host operating system 111 .
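Management module 110 is described above as able to terminate and create instantiations of virtual machines. A minimal sketch of that lifecycle responsibility, under the assumption that the module simply tracks which instantiations exist (the class and method names are hypothetical, not taken from the patent):

```python
# Hypothetical sketch of a management module's virtual-machine lifecycle
# duties: creating, terminating, and re-instantiating virtual machines.

class ManagementModule:
    def __init__(self):
        self.virtual_machines = {}  # vm_id -> state of each instantiation

    def instantiate(self, vm_id):
        """Create a new instantiation of a virtual machine."""
        self.virtual_machines[vm_id] = "running"

    def terminate(self, vm_id):
        """Terminate an existing instantiation, if present."""
        self.virtual_machines.pop(vm_id, None)

    def reinstantiate(self, vm_id):
        """Terminate the failed instance and create a fresh instantiation."""
        self.terminate(vm_id)
        self.instantiate(vm_id)
```

In a real system these operations would invoke the hypervisor's management interface; here they only illustrate the terminate/create pairing described in the text.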
- Virtual machines 112 and 118 may each operate as a self-contained operating environment that behaves as if it is a separate computer. Virtual machines 112 and 118 may work in conjunction with, yet independently of, host operating system 111 operating on physical host 102 . In certain embodiments, each virtual machine could be dedicated to the task of handling a single function. For example, in a particular embodiment, virtual machine 112 could be a mail server, while virtual machine 118 present on the same physical host 102 a could be a file server. In the same or alternative embodiments, virtual machine 112 could operate using a particular operating system (e.g., Windows®), while virtual machine 118 present on the same physical host 102 may operate using a different operating system (e.g., Mac OS®).
- The host operating system on physical host 102 a may use a different operating system than those used by virtual machines 112 , 118 present on physical host 102 a .
- For example, physical host 102 a may operate using UNIX®, while virtual machine 112 may operate using Windows®, and virtual machine 118 may operate using Mac OS®.
- Each virtual machine 112 , 118 may include an agent 114 , and programs including a guest operating system 115 , and one or more applications 116 .
- The term “program” may be used to refer to any set of instructions embodied in a computer-readable medium and executable by an information handling system, and may include, without limitation, operating systems and applications.
- A guest operating system may be any program that manages other programs of a virtual machine, and interfaces with a host operating system running on a physical host 102 .
- An application refers to any program operable to run on a guest operating system that may be written to perform one or more particular tasks or functions (e.g., word processing, database management, spreadsheets, desktop publishing, graphics, finance, education, telecommunication, inventory control, payroll management, Internet browsing and/or others).
- Agent 114 may be any system, device or apparatus operable to monitor one or more programs 115 , 116 running on a virtual machine 112 , 118 , and/or send messages to a management module 110 , as described in greater detail below. Agent 114 may be implemented using hardware, software, or any combination thereof.
- Management module 110 may monitor one or more parameters associated with a program 115 , 116 running on a virtual machine 112 , 118 .
- Agent 114 associated with each virtual machine 112 , 118 may monitor parameters indicative of the resource utilization of a program 115 , 116 , such as processor utilization, memory utilization, disk utilization, and/or network utilization, for example.
- Agent 114 may monitor parameters related to the “health” of a program 115 , 116 , such as whether the program is running and/or whether the program has access to required resources and/or services.
- Each agent 114 may communicate to its associated management module 110 regarding the monitored parameters.
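The agent's role described above amounts to sampling resource-utilization and health parameters and forwarding them to the management module. The sketch below illustrates that flow; the parameter names, the report format, and the use of a simple list as the communication channel are all assumptions for illustration.

```python
# Illustrative sketch of an agent that samples parameters for a monitored
# program and reports them to a management module over some channel.

def collect_parameters(program_running, cpu_pct, mem_pct):
    """Gather one sample of health and resource-utilization parameters."""
    return {
        "running": program_running,      # "health": is the program running?
        "cpu_utilization": cpu_pct,      # resource utilization, in percent
        "memory_utilization": mem_pct,
    }

class Agent:
    def __init__(self, management_module_inbox):
        # A plain list stands in for the communication channel to module 110.
        self.inbox = management_module_inbox

    def report(self, parameters):
        """Communicate one set of monitored parameters to the module."""
        self.inbox.append(parameters)
```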
- Management module 110 may also monitor any number of parameters related to a virtual machine 112 , 118 or a program 115 , 116 running thereon, including those program parameters monitored by agents 114 . For example, management module 110 may monitor whether or not an agent 114 is running on a virtual machine 112 , 118 . If management module 110 determines an agent 114 is not running on a virtual machine 112 , 118 , this may indicate a problem or failure associated with the particular virtual machine 112 , 118 .
- Management module 110 may be further operable to determine if the one or more monitored parameters are within a respective predetermined acceptable range.
- A respective predetermined acceptable range for a particular parameter may be any suitable range of numerical or logical values.
- For example, a predetermined acceptable range for processor utilization of a particular program 115 , 116 may be a range of percentage values.
- Another parameter may indicate whether a particular program 115 , 116 is running on a virtual machine 112 , 118 , and may have a logical value of “yes” or “true” to indicate the program is running, and a logical value of “no” or “false” to otherwise indicate that the program is not running.
- In such a case, the predetermined acceptable range for the parameter may be the logical value “yes” or “true.”
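Both kinds of acceptable range described above, numeric intervals and logical values, can be checked with one predicate. This is a sketch under the assumption that a numeric range is represented as a `(low, high)` pair and a logical range as a single required value; the names and the 85% threshold are illustrative only.

```python
# Sketch of "predetermined acceptable ranges" covering both parameter kinds:
# numeric ranges (e.g. percentage utilization) and logical values
# (e.g. "is the program running?").

def in_acceptable_range(value, acceptable):
    if isinstance(acceptable, tuple):    # numeric range given as (low, high)
        low, high = acceptable
        return low <= value <= high
    return value == acceptable           # logical value such as True ("yes")

# An assumed set of ranges, for illustration:
ranges = {
    "cpu_utilization": (0.0, 0.85),  # hypothetical 85% utilization ceiling
    "running": True,                 # program must be running
}
```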
- A predetermined acceptable range for a parameter may be set automatically or manually.
- One or more predetermined acceptable ranges may be determined by a manufacturer.
- One or more predetermined acceptable ranges may be determined by a user and/or system administrator.
- One or more predetermined acceptable ranges may be based on the types of computing resources comprising system 100 .
- One or more predetermined ranges may be based on processing capacity, storage capacity, type of storage, memory capacity, network capacity, type of network, operating system, application, and/or any other number of suitable factors.
- A parameter associated with a program falling outside of the parameter's respective predetermined acceptable range may indicate a failure of the program. For instance, a determination that processor usage by a particular program is excessive may indicate a failure in such program.
- The term “failure” includes actual failures, potential failures, impending failures and/or any other similar event.
- In response to detecting a failure, management module 110 may trigger an event.
- An event may include any action and/or response within system 100 that may cure a failure indicated by a parameter not falling within its predetermined acceptable range.
- An event may comprise management module 110 or another component of system 100 issuing notification to a user and/or system administrator, such as an alert and/or e-mail message, for example.
- An event may comprise the allocation of more computing resources (e.g. processor capacity, memory capacity, storage capacity and/or network capacity) to a virtual machine 112 , 118 and/or a program 115 , 116 running thereon.
- For example, management module 110 may cause host 102 a to allocate more memory to program 115 , 116 .
- An event may also comprise the instantiation of a new virtual machine 112 , 118 and/or program 115 , 116 .
- An event may comprise restarting a program 115 , 116 .
- If management module 110 detects a failure of a program 115 , 116 running on virtual machine 112 , it may cause the program 115 , 116 to be terminated and restarted on the same virtual machine 112 .
- Alternatively, management module 110 may cause the re-instantiation of the virtual machine 112 on host 102 a , and cause the program 115 , 116 to be restarted on the re-instantiated virtual machine 112 (as depicted in FIG. 3 ).
- Management module 110 may also cause the re-instantiation of the virtual machine 112 on host 102 b , and cause the program 115 , 116 to be restarted on the re-instantiated virtual machine 112 (as depicted in FIG. 4 ).
- Although FIG. 1 depicts a system 100 comprising two hosts 102 a and 102 b , it is understood that system 100 may comprise any number of hosts 102 .
- Although FIG. 1 depicts host 102 a comprising virtual machines 112 and 118 , it is understood that hosts 102 may comprise any number of virtual machines.
- Although FIG. 1 depicts one guest operating system 115 and one application 116 running on each of virtual machines 112 and 118 , it is understood that any number of programs 115 , 116 may run on virtual machines 112 , 118 .
- Although virtual machines 112 and 118 are depicted as comprising agents 114 , it is understood that agents 114 may be implemented independently of virtual machines 112 , 118 .
- Although application 116 is depicted as running on guest operating system 115 , it is understood that application 116 may run independently of guest operating system 115 .
- FIG. 2 illustrates a flow chart of an example method 200 for recovering from failures in a virtual machine environment.
- Method 200 includes monitoring one or more parameters associated with a program 115 , 116 running on a virtual machine 112 , 118 and triggering an event if one or more of the monitored parameters fall outside their respective predetermined acceptable ranges.
- Method 200 preferably begins at step 202 .
- Teachings of the present disclosure may be implemented in a variety of configurations of system 100 . As such, the preferred initialization point for method 200 and the order and identity of the steps 202 - 226 comprising method 200 may depend on the implementation chosen.
- Agent 114 , management module 110 , or another component of system 100 may monitor one or more parameters associated with a program 115 , 116 running on a virtual machine 118 instantiated on physical host 102 a .
- Management module 110 or another component of system 100 may determine if any of the one or more monitored parameters has not been received over a predetermined time period. For example, management module 110 may determine whether or not agent 114 has failed to communicate a particular parameter value to the management module for a predetermined time period. The predetermined time period, or “timeout” period, may be any suitable length of time, and may be automatically or manually determined. Failure of management module 110 to receive a particular parameter value may indicate a failure of virtual machine 118 or a program 115 , 116 running thereon.
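The timeout check described above can be sketched as comparing, for each parameter, the time since it was last received against the predetermined "timeout" period. The function names and the use of plain numeric timestamps are assumptions for illustration.

```python
# Sketch of the timeout check: a parameter that has not been received within
# the predetermined "timeout" period may indicate a failure of the virtual
# machine or of a program running on it.

def timed_out(last_received, now, timeout_seconds):
    """True if the parameter has not been received within the timeout period."""
    return (now - last_received) > timeout_seconds

def stale_parameters(last_received_times, now, timeout_seconds):
    """Names of all parameters whose last report is older than the timeout."""
    return [name for name, t in last_received_times.items()
            if timed_out(t, now, timeout_seconds)]
```

A production monitor would use a real clock (e.g. `time.monotonic()`) rather than caller-supplied timestamps; taking `now` as an argument here keeps the sketch deterministic.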
- Management module 110 or another component of system 100 may determine if the one or more parameters are within their respective predetermined acceptable ranges, as discussed in greater detail above with respect to FIG. 1 . If it is determined that all of the parameters are being received and are within their respective predetermined acceptable ranges, method 200 may, at step 206 , proceed again to step 202 , in which case the loop of steps 202 - 206 may repeat until a parameter is determined to be outside of its respective predetermined acceptable range. Alternatively, if one or more monitored parameters are not within their predetermined acceptable ranges, method 200 may, at step 206 , proceed to step 208 .
- Management module 110 or another component of system 100 may trigger and/or execute one or more events in response to a determination that a parameter is not within its respective predetermined acceptable range. For example, at step 208 , management module 110 or another component of system 100 may send a notification (such as an alert or email, for example) to a user and/or system administrator that one or more parameters are not within their respective predetermined acceptable ranges.
- Management module 110 or another component of system 100 may attempt to allocate more computing resources to the program 115 , 116 . For example, more processor capacity, memory capacity, storage capacity, network capacity and/or other resources may be allocated to program 115 , 116 .
- Management module 110 or another component of system 100 may make a determination of whether the allocation of more resources to program 115 , 116 was successful in bringing all monitored parameters within their respective predetermined acceptable ranges. If successful, method 200 may proceed again to step 202 where the parameters may continue to be monitored. On the other hand, if the allocation of additional resources to program 115 , 116 was not successful, method 200 may proceed to step 214 .
- Management module 110 or another component of system 100 may attempt to terminate program 115 , 116 and restart it on the same virtual machine 118 . If the attempt is successful in bringing all monitored parameters within their respective predetermined acceptable ranges, method 200 may, at step 215 , proceed again to step 202 where the parameters may continue to be monitored. Otherwise, method 200 may, at step 215 , proceed to step 216 .
- Management module 110 or another component of system 100 may perform a hard restart of virtual machine 118 on the same host 102 a .
- A hard restart of virtual machine 118 may comprise shutting down virtual machine 118 and powering it up again.
- Management module 110 or another component of system 100 may restart program 115 , 116 on the restarted virtual machine 118 . If this restart of program 115 , 116 is successful in bringing all monitored parameters within their respective predetermined acceptable ranges, method 200 may, at step 218 , proceed again to step 202 where the parameters may continue to be monitored. Otherwise, method 200 may, at step 218 , proceed to step 219 .
- Management module 110 or another component of system 100 may re-instantiate virtual machine 118 as virtual machine 128 on the same host 102 a , as depicted in FIG. 3 .
- Management module 110 or another component of system 100 may restart program 115 , 116 as program 131 , 132 on the re-instantiated virtual machine 128 . If this restart of program 115 , 116 as program 131 , 132 is successful in bringing all monitored parameters within their respective predetermined acceptable ranges, method 200 may, at step 222 , proceed again to step 202 where the parameters may continue to be monitored. Otherwise, method 200 may, at step 222 , proceed to step 224 .
- Management module 110 or another component of system 100 may re-instantiate virtual machine 118 as virtual machine 132 on a second host 102 b , as depicted in FIG. 4 .
- Management module 110 or another component of system 100 may restart program 115 , 116 as program 137 , 138 on the re-instantiated virtual machine 132 .
- FIG. 2 discloses a particular number of steps to be taken with respect to method 200 , it is understood that method 200 may be executed with greater or lesser steps than those depicted in FIG. 2 . For example, in certain embodiments of method 200 , steps 208 - 212 may not be executed.
- Method 200 may be implemented using system 100 or any other system operable to implement method 200 . In certain embodiments, method 200 may be implemented in software embodied in tangible computer readable media.
Abstract
Description
- The present disclosure relates in general to clustered network environments, and more particularly to a system and method of recovering from failures in a virtual machine.
- As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option available to users is information handling systems. An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information. Because technology and information handling needs and requirements vary between different users or applications, information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
- Information handling systems, including servers, workstations, and other computers, are often grouped into computer networks, including networks having a client-server architecture in which servers may access storage, including shared storage, in response to requests from client computers of the network. The servers, also known as physical hosts, may include one or more virtual machines running on the host operating system and the host software of the physical host. Each virtual machine may comprise a virtual or “guest” OS. A single physical host may include multiple virtual machines in which each virtual machine appears as a logical machine on a computer network. The presence of one or more virtual machines on a single physical host provides a separation of the hardware and software of a networked computer system. In certain instances, each virtual machine could be dedicated to the task of handling a single function. For example, in a particular embodiment, one virtual machine could be a mail server, while another virtual machine present on the same physical host could be a file server. In addition, any number of programs, e.g., operating systems and/or applications, may run on each virtual machine.
- In many computer systems, it is often desirable to reduce downtime or inaccessibility caused by failure of a physical host, virtual machine, or a program. However, conventional approaches to diagnosing and recovering from failures address only “hard” failures occurring in the host operating system of a physical host, or a physical failure of the physical host. These traditional approaches do not provide automated methods of diagnosing “soft” failures, such as those failures occurring inside a virtual machine, such as a guest operating system failure or failure of another program running on the virtual machine. Accordingly, systems and methods that provide for diagnosis and recovery of software and operating system failures occurring in virtual machines are desired.
- In accordance with the teachings of the present disclosure, disadvantages and problems associated with diagnosis and recovery of failures in a virtual machine may be substantially reduced or eliminated. For example, the systems and methods disclosed herein may be technically advantageous because they may provide for the recovery of “soft” failures occurring in a virtual machine, while conventional approaches generally provide only for the recovery of “hard” failures of a physical host machine. In a particular embodiment, a system may include a management module operable to determine the occurrence of a program failure in a virtual machine, and further operable to restart the program in response to the failure.
- In accordance with one embodiment of the present disclosure, a method for recovering from failures in a virtual machine is provided. The method may include, in a first physical host having a host operating system and a virtual machine running on the host operating system, monitoring one or more parameters associated with a program running on the virtual machine, each parameter having a predetermined acceptable range. The method may further include determining if the one or more parameters are within their respective predetermined acceptable ranges. In response to determining that the one or more parameters associated with the program running on the virtual machine are not within their respective predetermined acceptable ranges, a management module may cause the program running on the virtual machine to be restarted.
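The loop this method describes — sample each parameter, compare it against its predetermined acceptable range, and restart the program on a violation — can be sketched as follows. This is a hypothetical Python illustration; the function names, parameter names, and limits are assumptions, not taken from the disclosure:

```python
def out_of_range(parameters, acceptable):
    """Return the names of parameters falling outside their acceptable ranges."""
    return [name for name, value in parameters.items()
            if not (acceptable[name][0] <= value <= acceptable[name][1])]

def monitor_once(parameters, acceptable, restart_program):
    """One pass of the loop: restart the program if any parameter is out of range."""
    bad = out_of_range(parameters, acceptable)
    if bad:
        restart_program()  # in the disclosure, the management module does this
    return bad

# Illustrative values only: CPU use of 97% exceeds an assumed 0-80% range.
restarts = []
bad = monitor_once({"cpu_percent": 97.0, "memory_mb": 512.0},
                   {"cpu_percent": (0.0, 80.0), "memory_mb": (0.0, 1024.0)},
                   restart_program=lambda: restarts.append("restarted"))
```

In the disclosed system the restart decision would escalate (more resources, program restart, virtual machine restart) rather than call a single handler; this sketch collapses that to one callback.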
- In accordance with another embodiment of the present disclosure, a system for recovering from failures in a virtual machine may include a first physical host. The first physical host may include a host operating system, a management module in communication with the host operating system, and a virtual machine running on the host operating system and in communication with the management module. The virtual machine may be operable to run a program and run an agent. The agent may be operable to communicate to the management module one or more parameters associated with the program, each parameter having a predetermined acceptable range. The management module may be operable to determine if the one or more parameters associated with the program running on the virtual machine are within their respective predetermined acceptable ranges, and in response to determining that the one or more parameters are not within their respective predetermined acceptable ranges, cause the program running on the virtual machine to be restarted.
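The agent-to-management-module reporting path can be sketched as follows. The class and function names are hypothetical, and the staleness check mirrors the disclosure's later idea that a report not received within a predetermined "timeout" period may itself indicate a virtual machine failure:

```python
import time

class Agent:
    """Sketch of a per-virtual-machine agent reporting program health parameters."""
    def __init__(self, probe):
        self.probe = probe  # callable returning raw measurements for the program

    def report(self):
        # Package the measurements with a timestamp for the management module.
        sample = self.probe()
        sample["timestamp"] = time.time()
        return sample

def report_is_stale(last_timestamp, now, timeout_s):
    """True if no report arrived within the timeout period."""
    return (now - last_timestamp) > timeout_s

# Fixed values stand in for real measurements of a monitored program.
agent = Agent(lambda: {"cpu_percent": 12.5, "running": True})
r = agent.report()
```

A management module receiving such reports could treat either a stale report or an out-of-range value as grounds to trigger a recovery event.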
- In accordance with a further embodiment of the present disclosure, an information handling system may include a processor, a memory communicatively coupled to the processor, a management module communicatively coupled to the memory and the processor, and a host operating system running on the information handling system and having a virtual machine running thereon. The virtual machine may be in communication with the management module and may be operable to run a program and run an agent. The agent may be operable to communicate to the management module one or more parameters associated with the program, each parameter having a predetermined acceptable range. The management module may be operable to determine if the one or more parameters associated with the program running on the virtual machine are within their respective predetermined acceptable ranges, and in response to determining that the one or more parameters are not within their respective predetermined acceptable ranges, cause the program running on the virtual machine to be restarted.
- Other technical advantages will be apparent to those of ordinary skill in the art in view of the following specification, claims, and drawings.
- A more complete understanding of the present embodiments and advantages thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings, in which like reference numbers indicate like features, and wherein:
- FIG. 1 illustrates a block diagram of an example system for recovering from failures in a virtual machine, in accordance with teachings of the present disclosure;
- FIG. 2 illustrates a flow chart of a method for recovering from failures in a virtual machine, in accordance with teachings of the present disclosure;
- FIG. 3 illustrates the block diagram of the system of FIG. 1, demonstrating the restarting of a program by re-instantiating a virtual machine on a physical host, in accordance with the present disclosure; and
- FIG. 4 illustrates the block diagram of the system of FIG. 1, demonstrating the restarting of a program by re-instantiating a virtual machine on a second physical host, in accordance with the present disclosure.
- Preferred embodiments and their advantages are best understood by reference to
FIGS. 1 through 4 , wherein like numbers are used to indicate like and corresponding parts. - For purposes of this disclosure, an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes. For example, an information handling system may be a personal computer, a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. The information handling system may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU) or hardware or software control logic, ROM, and/or other types of nonvolatile memory. Additional components of the information handling system may include one or more disk drives, one or more network ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, a mouse, and a video display. The information handling system may also include one or more buses operable to transmit communications between the various hardware components.
- FIG. 1 illustrates a block diagram of an example system 100 for recovering from failures in a virtual machine, in accordance with teachings of the present disclosure. As shown in FIG. 1, system 100 may include physical hosts 102 a and 102 b, network 124, and network storage 126. Host devices 102 may include one or more information handling systems, as defined herein, and may be communicatively coupled to network 124. Host devices 102 may be any type of processing device and may provide any type of functionality associated with an information handling system, including without limitation database management, transaction processing, storage, printing, or web server functionality. - Although a specific network is illustrated in
FIG. 1, the term “network” should be interpreted as generically defining any network capable of transmitting telecommunication signals, data, and/or messages. Network 124 may be a local area network (LAN), a metropolitan area network (MAN), a storage area network (SAN), a wide area network (WAN), a wireless local area network (WLAN), a virtual private network (VPN), an intranet, the Internet, or any other appropriate architecture or system that facilitates the communication of signals, data, and/or messages (generally referred to as media). Network 124 may transmit media using the Fibre Channel (FC) standard, Frame Relay, Asynchronous Transfer Mode (ATM), Internet protocol (IP), another packet-based protocol, and/or any other transmission protocol and/or standard for transmitting media over a network. -
Network storage 126 may be communicatively coupled to network 124. Network storage 126 may include any system, device, or apparatus operable to store media transmitted over network 124. Network storage 126 may include, for example, network attached storage, one or more direct access storage devices (e.g., hard disk drives), and/or one or more sequential access storage devices (e.g., tape drives). In certain embodiments, network storage 126 may be SCSI, iSCSI, SAS, and/or Fibre Channel based storage. - As depicted in
FIG. 1, physical hosts 102 may include a processor 104, a memory 106, local storage 108, a management module 110, and a host operating system 111. In addition, physical hosts 102 may host one or more virtual machines 112, 118. - Memory 106 may be communicatively coupled to processor 104. Memory 106 may be any system, device or apparatus operable to store and maintain media. For example, memory 106 may include data and/or instructions used by processor 104. Memory 106 may include random access memory (RAM), electronically erasable programmable read-only memory (EEPROM), a PCMCIA card, flash memory, and/or any suitable selection and/or array of volatile or non-volatile memory.
- Local storage 108 may be communicatively coupled to processor 104. Local storage 108 may include any system, device, or apparatus operable to store media processed by processor 104. Local storage 108 may include, for example, network attached storage, one or more direct access storage devices (e.g. hard disk drives), and/or one or more sequential access storage devices (e.g. tape drives).
- Management module 110 may be coupled to processor 104, and may be any system, device or apparatus operable to monitor and/or receive information from virtual machines 112, 118 and from programs 116, 122 running on virtual machines 112, 118. - Generally speaking,
virtual machines 112, 118 may each be dedicated to the task of handling a single function. For example, in a particular embodiment, virtual machine 112 could be a mail server, while virtual machine 118 present on the same physical host 102 a could be a file server. In the same or alternative embodiments, virtual machine 112 could operate using a particular operating system (e.g., Windows®), while virtual machine 118 present on the same physical host 102 a may operate using a different operating system (e.g., Mac OS®). In the same or alternative embodiments, the host operating system operating on physical host 102 a may operate using a different operating system than the operating systems operating on virtual machines 112, 118 of physical host 102 a. For example, physical host 102 a may operate using UNIX®, while virtual machine 112 may operate using Windows®, and virtual machine 118 may operate using Mac OS®. - Each
virtual machine 112, 118 may include an agent 114 and programs, including a guest operating system 115 and one or more applications 116. As used in this disclosure, the term “program” may be used to refer to any set of instructions embodied in a computer-readable medium and executable by an information handling system, and may include, without limitation, operating systems and applications. As used in this disclosure, “guest operating system” may be any program that manages other programs of a virtual machine, and interfaces with a host operating system running on a physical host 102. As used in this disclosure, “application” refers to any program operable to run on a guest operating system that may be written to perform one or more particular tasks or functions (e.g., word processing, database management, spreadsheets, desktop publishing, graphics, finance, education, telecommunication, inventory control, payroll management, Internet browsing, and/or others). -
Agent 114 may be any system, device or apparatus operable to monitor one or more programs 115, 116 running on a virtual machine 112, 118. Agent 114 may be implemented using hardware, software, or any combination thereof. - In operation, management module 110, along with
agents 114 associated with each virtual machine 112, 118, may monitor one or more parameters associated with a program 115, 116 running on a virtual machine 112, 118. For example, the agent 114 associated with each virtual machine 112, 118 may monitor parameters related to the “health” of a program 115, 116. - Each
agent 114 may communicate to its associated management module 110 regarding the monitored parameters. In addition, management module 110 may also monitor any number of parameters related to a virtual machine 112, 118, a program 115, 116, and/or agents 114. For example, management module 110 may monitor whether or not an agent 114 is running on a virtual machine 112, 118; a determination that an agent 114 is not running on a virtual machine 112, 118 may indicate a failure of that virtual machine. - Management module 110 may be further operable to determine if the one or more monitored parameters are within a respective predetermined acceptable range. A respective predetermined acceptable range for a particular parameter may be any suitable range of numerical or logical values. For example, a predetermined acceptable range for processor utilization of a
particular program 115, 116 may be a range of numerical values, while a predetermined acceptable range for whether or not a particular program 115, 116 is running on a virtual machine 112, 118 may be a logical value. - A predetermined acceptable range for a parameter may be set automatically or manually. In certain embodiments, one or more predetermined acceptable ranges may be determined by a manufacturer. In the same or alternative embodiments, one or more predetermined acceptable ranges may be determined by a user and/or system administrator. In the same or alternative embodiments, one or more predetermined acceptable ranges may be based on the types of computing
resources comprising system 100. For example, one or more predetermined ranges may be based on processing capacity, storage capacity, type of storage, memory capacity, network capacity, type of network, operating system, application, and/or any other number of suitable factors. - The existence of a parameter associated with a program falling outside of the parameter's respective predetermined acceptable range may indicate a failure of the program. For instance, a determination that processor usage by a particular program is excessive may indicate a failure in such program. As used in this disclosure, the term “failure” includes actual failures, potential failures, impending failures and/or any other similar event.
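The distinction drawn above — numerical ranges for quantities like processor utilization, logical values for conditions like whether a program is running — can be illustrated with a small sketch. This is a hypothetical Python illustration; the specific limits and names are assumptions, not values from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class NumericRange:
    """A numerical acceptable range: value must fall between low and high."""
    low: float
    high: float
    def contains(self, value):
        return self.low <= value <= self.high

# Acceptable ranges, settable by a manufacturer, user, or administrator.
ACCEPTABLE = {
    "cpu_percent": NumericRange(0.0, 80.0),   # numerical range (assumed limit)
    "memory_mb": NumericRange(0.0, 1024.0),   # numerical range (assumed limit)
    "running": {True},                        # logical value: must be running
}

def within_range(name, value):
    """Check one monitored parameter against its predetermined range."""
    spec = ACCEPTABLE[name]
    return spec.contains(value) if isinstance(spec, NumericRange) else value in spec
```

A parameter failing this check would, per the disclosure, indicate an actual, potential, or impending failure of the associated program.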
- In response to determining that one or more parameters associated with a program are not within their predetermined acceptable ranges, management module 110 may trigger an event. An event may include any action and/or response within
system 100 that may cure a failure indicated by a parameter not falling within its predetermined acceptable range. For example, an event may comprise management module 110 or another component of system 100 issuing a notification, such as an alert and/or e-mail message, to a user and/or system administrator. In addition, an event may comprise the allocation of more computing resources (e.g., processor capacity, memory capacity, storage capacity, and/or network capacity) to a virtual machine 112, 118 and/or a program 115, 116. - An event may also comprise the instantiation of a new
virtual machine and the restarting of a failed program 115, 116 on the new virtual machine. For example, if management module 110 detects a failure of a program 115, 116 running on virtual machine 112, it may cause the program 115, 116 to be terminated and restarted on virtual machine 112. Alternatively, if management module 110 detects a failure of a program 115, 116 running on virtual machine 112, it may cause the re-instantiation of the virtual machine 112 on host 102 a, and cause the program 115, 116 to be restarted on the re-instantiated virtual machine (as depicted in FIG. 3). Alternatively, if management module 110 detects a failure of a program 115, 116 running on virtual machine 112, it may cause the re-instantiation of the virtual machine 112 on host 102 b, and cause the program 115, 116 to be restarted on the re-instantiated virtual machine (as depicted in FIG. 4). - Although
FIG. 1 depicts a system 100 comprising two hosts 102 a, 102 b, it is understood that system 100 may comprise any number of hosts 102. In addition, although FIG. 1 depicts host 102 a comprising virtual machines 112, 118, it is understood that a host 102 may comprise any number of virtual machines. Moreover, although FIG. 1 depicts one guest operating system 115 and one application 116 running on each of virtual machines 112, 118, it is understood that any number of programs may run on each of virtual machines 112, 118. - Although
virtual machines 112, 118 are depicted as comprising agents 114, it is understood that agents 114 may be implemented independently of virtual machines 112, 118. In addition, although application 116 is depicted as running on guest operating system 115, it is understood that application 116 may run independently of guest operating system 115. -
FIG. 2 illustrates a flow chart of an example method 200 for recovering from failures in a virtual machine environment. In one embodiment, method 200 includes monitoring one or more parameters associated with a program 115, 116 running on a virtual machine 118. - According to one embodiment,
method 200 preferably begins at step 202. Teachings of the present disclosure may be implemented in a variety of configurations of system 100. As such, the preferred initialization point for method 200 and the order and identity of the steps 202-226 comprising method 200 may depend on the implementation chosen. - At
step 202, agent 114, management module 110, or another component of system 100 may monitor one or more parameters associated with a program 115, 116 running on a virtual machine 118 instantiated on physical host 102 a. At step 203, management module 110 or another component of system 100 may determine if any of the one or more monitored parameters has not been received over a predetermined time period. For example, management module 110 may determine whether or not agent 114 has failed to communicate a particular parameter value to the management module for a predetermined time period. The predetermined time period, or “timeout” period, may be any suitable length of time, and may be automatically or manually determined. Failure of management module 110 to receive a particular parameter value may indicate a failure of virtual machine 118 or a program 115, 116. - At
step 204, management module 110 or another component of system 100 may determine if the one or more parameters are within their respective predetermined acceptable ranges, as discussed in greater detail above with respect to FIG. 1. If it is determined that all of the parameters are being received and are within their respective predetermined acceptable ranges, method 200 may, at step 206, proceed again to step 202, in which case the loop of steps 202-206 may repeat until a parameter is determined to be outside of its respective predetermined acceptable range. Alternatively, if one or more monitored parameters are not within their predetermined acceptable ranges, method 200 may, at step 206, proceed to step 208. - At steps 208-226, management module 110 or another component of
system 100 may trigger and/or execute one or more events in response to a determination that a parameter is not within its respective predetermined acceptable range. For example, at step 208, management module 110 or another component of system 100 may send a notification (such as an alert or e-mail, for example) to a user and/or system administrator that one or more parameters are not within their respective predetermined acceptable ranges. At step 210, management module 110 or another component of system 100 may attempt to allocate more computing resources to the program 115, 116. - At
step 212, management module 110 or another component of system 100 may make a determination of whether the allocation of more resources to program 115, 116 was successful in bringing all monitored parameters within their respective predetermined acceptable ranges. If successful, method 200 may proceed again to step 202, where the parameters may continue to be monitored. On the other hand, if the allocation of additional resources to program 115, 116 was not successful, method 200 may proceed to step 214. - At
step 214, management module 110 or another component of system 100 may attempt to terminate program 115, 116 and restart it on the same virtual machine 118. If the attempt is successful in bringing all monitored parameters within their respective predetermined acceptable ranges, method 200 may, at step 215, proceed again to step 202, where the parameters may continue to be monitored. Otherwise, method 200 may, at step 215, proceed to step 216. - At
step 216, management module 110 or another component of system 100 may perform a hard restart of virtual machine 118 on the same host 102 a. A hard restart of virtual machine 118 may comprise shutting down the virtual machine and powering it up again. At step 217, management module 110 or another component of system 100 may restart program 115, 116 on the restarted virtual machine 118. If this restart of program 115, 116 is successful in bringing all monitored parameters within their respective predetermined acceptable ranges, method 200 may, at step 218, proceed again to step 202, where the parameters may continue to be monitored. Otherwise, method 200 may, at step 218, proceed to step 219. - At
step 219, management module 110 or another component of system 100 may re-instantiate virtual machine 118 as virtual machine 128 on the same host 102 a, as depicted in FIG. 3. At step 220, management module 110 or another component of system 100 may restart program 115, 116 as program 131, 132 on the re-instantiated virtual machine 128. If this restart of program 115, 116 as program 131, 132 is successful in bringing all monitored parameters within their respective predetermined acceptable ranges, method 200 may, at step 222, proceed again to step 202, where the parameters may continue to be monitored. Otherwise, method 200 may, at step 222, proceed to step 224. - At
step 224, management module 110 or another component of system 100 may re-instantiate virtual machine 118 as virtual machine 132 on a second host 102 b, as depicted in FIG. 4. At step 226, management module 110 or another component of system 100 may restart program 115, 116 as program 137, 138 on the re-instantiated virtual machine 132. - Although
FIG. 2 discloses a particular number of steps to be taken with respect to method 200, it is understood that method 200 may be executed with more or fewer steps than those depicted in FIG. 2. For example, in certain embodiments of method 200, steps 208-212 may not be executed. Method 200 may be implemented using system 100 or any other system operable to implement method 200. In certain embodiments, method 200 may be implemented in software embodied in tangible computer-readable media. - Although the present disclosure has been described in detail, it should be understood that various changes, substitutions, and alterations can be made hereto without departing from the spirit and the scope of the invention as defined by the appended claims.
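The escalation in steps 208-226 can be read as an ordered ladder of recovery actions, each attempted only when the previous one fails to bring the monitored parameters back within range. The following is a hypothetical Python sketch of that ordering; the action names are ours, and the health re-check after each attempt is stood in for by a single argument:

```python
# Recovery actions in FIG. 2's order, least to most disruptive.
LADDER = [
    "notify_administrator",          # step 208: alert or e-mail
    "allocate_more_resources",       # step 210: CPU/memory/storage/network
    "restart_program_same_vm",       # step 214: terminate and restart program
    "hard_restart_vm",               # step 216: power-cycle the virtual machine
    "reinstantiate_vm_same_host",    # step 219: new VM instance, same host
    "reinstantiate_vm_second_host",  # step 224: new VM instance, second host
]

def recover(healthy_after):
    """Apply actions in order until parameters return to acceptable ranges.

    `healthy_after` names the first action that succeeds, standing in for a
    real re-check of the monitored parameters after each attempt.
    """
    attempted = []
    for action in LADDER:
        attempted.append(action)
        if action == healthy_after:
            break
    return attempted

# Suppose a hard restart of the virtual machine is what cures the failure:
trail = recover(healthy_after="hard_restart_vm")
```

This simplifies the disclosed method (notification at step 208 accompanies rather than replaces the later actions), but it captures the core design choice: cheap, program-level remedies are exhausted before the virtual machine itself is restarted or re-instantiated.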
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/759,099 US7797587B2 (en) | 2007-06-06 | 2007-06-06 | System and method of recovering from failures in a virtual machine |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/759,099 US7797587B2 (en) | 2007-06-06 | 2007-06-06 | System and method of recovering from failures in a virtual machine |
Publications (2)
Publication Number | Publication Date |
---|---|
US20080307259A1 true US20080307259A1 (en) | 2008-12-11 |
US7797587B2 US7797587B2 (en) | 2010-09-14 |
Family
ID=40096983
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/759,099 Active 2028-07-17 US7797587B2 (en) | 2007-06-06 | 2007-06-06 | System and method of recovering from failures in a virtual machine |
Country Status (1)
Country | Link |
---|---|
US (1) | US7797587B2 (en) |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100058307A1 (en) * | 2008-08-26 | 2010-03-04 | Dehaan Michael Paul | Methods and systems for monitoring software provisioning |
US20110041126A1 (en) * | 2009-08-13 | 2011-02-17 | Levy Roger P | Managing workloads in a virtual computing environment |
US20110154101A1 (en) * | 2009-12-22 | 2011-06-23 | At&T Intellectual Property I, L.P. | Infrastructure for rapid service deployment |
US20110225467A1 (en) * | 2010-03-12 | 2011-09-15 | International Business Machines Corporation | Starting virtual instances within a cloud computing environment |
WO2012016175A1 (en) * | 2010-07-30 | 2012-02-02 | Symantec Corporation | Providing application high availability in highly-available virtual machine environments |
US20120144038A1 (en) * | 2010-12-07 | 2012-06-07 | Cisco Technology, Inc. | System and method for allocating resources based on events in a network environment |
CN102495769A (en) * | 2010-09-30 | 2012-06-13 | 微软公司 | Dynamic virtual device failure recovery |
US8397240B2 (en) | 2008-01-16 | 2013-03-12 | Dell Products, Lp | Method to dynamically provision additional computer resources to handle peak database workloads |
US8413144B1 (en) * | 2010-07-30 | 2013-04-02 | Symantec Corporation | Providing application-aware high availability of virtual machines |
US20130159514A1 (en) * | 2010-08-16 | 2013-06-20 | Fujitsu Limited | Information processing apparatus and remote maintenance method |
US20130227335A1 (en) * | 2012-02-29 | 2013-08-29 | Steven Charles Dake | Recovery escalation of cloud deployments |
US20130275966A1 (en) * | 2012-04-12 | 2013-10-17 | International Business Machines Corporation | Providing application based monitoring and recovery for a hypervisor of an ha cluster |
US8667399B1 (en) | 2010-12-29 | 2014-03-04 | Amazon Technologies, Inc. | Cost tracking for virtual control planes |
US8667495B1 (en) * | 2010-12-29 | 2014-03-04 | Amazon Technologies, Inc. | Virtual resource provider with virtual control planes |
US20140344805A1 (en) * | 2013-05-16 | 2014-11-20 | Vmware, Inc. | Managing Availability of Virtual Machines in Cloud Computing Services |
US8954978B1 (en) | 2010-12-29 | 2015-02-10 | Amazon Technologies, Inc. | Reputation-based mediation of virtual control planes |
EP2726987A4 (en) * | 2011-11-04 | 2016-05-18 | Hewlett Packard Development Co | Fault processing in a system |
WO2016168035A1 (en) * | 2015-04-17 | 2016-10-20 | Microsoft Technology Licensing, Llc | Locally restoring functionality at acceleration components |
US9792154B2 (en) | 2015-04-17 | 2017-10-17 | Microsoft Technology Licensing, Llc | Data processing system having a hardware acceleration plane and a software plane |
US9836342B1 (en) * | 2014-09-05 | 2017-12-05 | VCE IP Holding Company LLC | Application alerting system and method for a computing infrastructure |
US10044678B2 (en) | 2011-08-31 | 2018-08-07 | At&T Intellectual Property I, L.P. | Methods and apparatus to configure virtual private mobile networks with virtual private networks |
US10095590B2 (en) * | 2015-05-06 | 2018-10-09 | Stratus Technologies, Inc | Controlling the operating state of a fault-tolerant computer system |
US10198294B2 (en) | 2015-04-17 | 2019-02-05 | Microsoft Licensing Technology, LLC | Handling tenant requests in a system that uses hardware acceleration components |
US10216555B2 (en) | 2015-06-26 | 2019-02-26 | Microsoft Technology Licensing, Llc | Partially reconfiguring acceleration components |
US10270709B2 (en) | 2015-06-26 | 2019-04-23 | Microsoft Technology Licensing, Llc | Allocating acceleration component functionality for supporting services |
US10296392B2 (en) | 2015-04-17 | 2019-05-21 | Microsoft Technology Licensing, Llc | Implementing a multi-component service using plural hardware acceleration components |
US10511478B2 (en) | 2015-04-17 | 2019-12-17 | Microsoft Technology Licensing, Llc | Changing between different roles at acceleration components |
CN112035219A (en) * | 2020-09-10 | 2020-12-04 | 深信服科技股份有限公司 | Virtual machine data access method, device, equipment and storage medium |
US20220417085A1 (en) * | 2010-06-07 | 2022-12-29 | Avago Technologies International Sales Pte. Limited | Advanced link tracking for virtual cluster switching |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9946982B2 (en) * | 2007-02-28 | 2018-04-17 | Red Hat, Inc. | Web-based support subscriptions |
US8209684B2 (en) * | 2007-07-20 | 2012-06-26 | Eg Innovations Pte. Ltd. | Monitoring system for virtual application environments |
US8135985B2 (en) * | 2009-06-17 | 2012-03-13 | International Business Machines Corporation | High availability support for virtual machines |
US9043454B2 (en) * | 2009-08-26 | 2015-05-26 | Red Hat Israel, Ltd. | Auto suspense of virtual machine on client disconnection |
US9329947B2 (en) * | 2010-06-22 | 2016-05-03 | Red Hat Israel, Ltd. | Resuming a paused virtual machine without restarting the virtual machine |
US9069622B2 (en) | 2010-09-30 | 2015-06-30 | Microsoft Technology Licensing, Llc | Techniques for load balancing GPU enabled virtual machines |
US9026862B2 (en) * | 2010-12-02 | 2015-05-05 | Robert W. Dreyfoos | Performance monitoring for applications without explicit instrumentation |
US20130232254A1 (en) * | 2012-03-02 | 2013-09-05 | Computenext Inc. | Cloud resource utilization management |
- 2007-06-06: US application Ser. No. 11/759,099 filed; granted as US7797587B2 (status: Active)
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4674038A (en) * | 1984-12-28 | 1987-06-16 | International Business Machines Corporation | Recovery of guest virtual machines after failure of a host real machine |
US5437033A (en) * | 1990-11-16 | 1995-07-25 | Hitachi, Ltd. | System for recovery from a virtual machine monitor failure with a continuous guest dispatched to a nonguest mode |
US5805790A (en) * | 1995-03-23 | 1998-09-08 | Hitachi, Ltd. | Fault recovery method and apparatus |
US6625751B1 (en) * | 1999-08-11 | 2003-09-23 | Sun Microsystems, Inc. | Software fault tolerant computer system |
US6691250B1 (en) * | 2000-06-29 | 2004-02-10 | Cisco Technology, Inc. | Fault handling process for enabling recovery, diagnosis, and self-testing of computer systems |
US6728896B1 (en) * | 2000-08-31 | 2004-04-27 | Unisys Corporation | Failover method of a simulated operating system in a clustered computing environment |
US7058629B1 (en) * | 2001-02-28 | 2006-06-06 | Oracle International Corporation | System and method for detecting termination of an application instance using locks |
US7409577B2 (en) * | 2001-05-25 | 2008-08-05 | Neverfail Group Limited | Fault-tolerant networks |
US20030167421A1 (en) * | 2002-03-01 | 2003-09-04 | Klemm Reinhard P. | Automatic failure detection and recovery of applications |
US7243267B2 (en) * | 2002-03-01 | 2007-07-10 | Avaya Technology Llc | Automatic failure detection and recovery of applications |
US6947957B1 (en) * | 2002-06-20 | 2005-09-20 | Unisys Corporation | Proactive clustered database management |
US7206836B2 (en) * | 2002-09-23 | 2007-04-17 | Sun Microsystems, Inc. | System and method for reforming a distributed data system cluster after temporary node failures or restarts |
US20070094659A1 (en) * | 2005-07-18 | 2007-04-26 | Dell Products L.P. | System and method for recovering from a failure of a virtual machine |
US20070174658A1 (en) * | 2005-11-29 | 2007-07-26 | Yoshifumi Takamoto | Failure recovery method |
Cited By (54)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8397240B2 (en) | 2008-01-16 | 2013-03-12 | Dell Products, Lp | Method to dynamically provision additional computer resources to handle peak database workloads |
US9477570B2 (en) * | 2008-08-26 | 2016-10-25 | Red Hat, Inc. | Monitoring software provisioning |
US20100058307A1 (en) * | 2008-08-26 | 2010-03-04 | Dehaan Michael Paul | Methods and systems for monitoring software provisioning |
US20110041126A1 (en) * | 2009-08-13 | 2011-02-17 | Levy Roger P | Managing workloads in a virtual computing environment |
US20110154101A1 (en) * | 2009-12-22 | 2011-06-23 | At&T Intellectual Property I, L.P. | Infrastructure for rapid service deployment |
US8566653B2 (en) * | 2009-12-22 | 2013-10-22 | At&T Intellectual Property I, L.P. | Infrastructure for rapid service deployment |
US8122282B2 (en) * | 2010-03-12 | 2012-02-21 | International Business Machines Corporation | Starting virtual instances within a cloud computing environment |
JP2013522709A (en) * | 2010-03-12 | 2013-06-13 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Launching virtual instances within a cloud computing environment |
GB2491235B (en) * | 2010-03-12 | 2017-07-19 | Ibm | Starting virtual instances within a cloud computing environment |
CN102792277B (en) * | 2010-03-12 | 2015-09-30 | 国际商业机器公司 | The method and system of virtual instance is started in cloud computing environment |
CN102792277A (en) * | 2010-03-12 | 2012-11-21 | 国际商业机器公司 | Starting virtual instances within a cloud computing environment |
US20110225467A1 (en) * | 2010-03-12 | 2011-09-15 | International Business Machines Corporation | Starting virtual instances within a cloud computing environment |
US20220417085A1 (en) * | 2010-06-07 | 2022-12-29 | Avago Technologies International Sales Pte. Limited | Advanced link tracking for virtual cluster switching |
US11757705B2 (en) * | 2010-06-07 | 2023-09-12 | Avago Technologies International Sales Pte. Limited | Advanced link tracking for virtual cluster switching |
US20120030670A1 (en) * | 2010-07-30 | 2012-02-02 | Jog Rohit Vijay | Providing Application High Availability in Highly-Available Virtual Machine Environments |
CN103201724A (en) * | 2010-07-30 | 2013-07-10 | 赛门铁克公司 | Providing application high availability in highly-available virtual machine environments |
US8424000B2 (en) * | 2010-07-30 | 2013-04-16 | Symantec Corporation | Providing application high availability in highly-available virtual machine environments |
JP2013535745A (en) * | 2010-07-30 | 2013-09-12 | シマンテック コーポレーション | Providing high availability for applications in highly available virtual machine environments |
US8413144B1 (en) * | 2010-07-30 | 2013-04-02 | Symantec Corporation | Providing application-aware high availability of virtual machines |
WO2012016175A1 (en) * | 2010-07-30 | 2012-02-02 | Symantec Corporation | Providing application high availability in highly-available virtual machine environments |
US20130159514A1 (en) * | 2010-08-16 | 2013-06-20 | Fujitsu Limited | Information processing apparatus and remote maintenance method |
CN102495769A (en) * | 2010-09-30 | 2012-06-13 | 微软公司 | Dynamic virtual device failure recovery |
US8788654B2 (en) * | 2010-12-07 | 2014-07-22 | Cisco Technology, Inc. | System and method for allocating resources based on events in a network environment |
US20120144038A1 (en) * | 2010-12-07 | 2012-06-07 | Cisco Technology, Inc. | System and method for allocating resources based on events in a network environment |
CN102546738A (en) * | 2010-12-07 | 2012-07-04 | 思科技术公司 | System and method for allocating resources based on events in a network environment |
US9882773B2 (en) | 2010-12-29 | 2018-01-30 | Amazon Technologies, Inc. | Virtual resource provider with virtual control planes |
US8667495B1 (en) * | 2010-12-29 | 2014-03-04 | Amazon Technologies, Inc. | Virtual resource provider with virtual control planes |
US8667399B1 (en) | 2010-12-29 | 2014-03-04 | Amazon Technologies, Inc. | Cost tracking for virtual control planes |
US8954978B1 (en) | 2010-12-29 | 2015-02-10 | Amazon Technologies, Inc. | Reputation-based mediation of virtual control planes |
US9553774B2 (en) | 2010-12-29 | 2017-01-24 | Amazon Technologies, Inc. | Cost tracking for virtual control planes |
US10033659B2 (en) | 2010-12-29 | 2018-07-24 | Amazon Technologies, Inc. | Reputation-based mediation of virtual control planes |
US10044678B2 (en) | 2011-08-31 | 2018-08-07 | At&T Intellectual Property I, L.P. | Methods and apparatus to configure virtual private mobile networks with virtual private networks |
EP2726987A4 (en) * | 2011-11-04 | 2016-05-18 | Hewlett Packard Development Co | Fault processing in a system |
US9081750B2 (en) * | 2012-02-29 | 2015-07-14 | Red Hat, Inc. | Recovery escalation of cloud deployments |
US20130227335A1 (en) * | 2012-02-29 | 2013-08-29 | Steven Charles Dake | Recovery escalation of cloud deployments |
CN104205060A (en) * | 2012-04-12 | 2014-12-10 | 国际商业机器公司 | Providing application based monitoring and recovery for a hypervisor of an ha cluster |
DE112013002014B4 (en) | 2012-04-12 | 2019-08-14 | International Business Machines Corporation | Provide application-based monitoring and recovery for an HA cluster hypervisor |
US20130275966A1 (en) * | 2012-04-12 | 2013-10-17 | International Business Machines Corporation | Providing application based monitoring and recovery for a hypervisor of an ha cluster |
US20130275805A1 (en) * | 2012-04-12 | 2013-10-17 | International Business Machines Corporation | Providing application based monitoring and recovery for a hypervisor of an ha cluster |
US9110867B2 (en) * | 2012-04-12 | 2015-08-18 | International Business Machines Corporation | Providing application based monitoring and recovery for a hypervisor of an HA cluster |
US9183034B2 (en) * | 2013-05-16 | 2015-11-10 | Vmware, Inc. | Managing availability of virtual machines in cloud computing services |
US20140344805A1 (en) * | 2013-05-16 | 2014-11-20 | Vmware, Inc. | Managing Availability of Virtual Machines in Cloud Computing Services |
US9836342B1 (en) * | 2014-09-05 | 2017-12-05 | VCE IP Holding Company LLC | Application alerting system and method for a computing infrastructure |
US9983938B2 (en) | 2015-04-17 | 2018-05-29 | Microsoft Technology Licensing, Llc | Locally restoring functionality at acceleration components |
US10198294B2 (en) | 2015-04-17 | 2019-02-05 | Microsoft Technology Licensing, Llc | Handling tenant requests in a system that uses hardware acceleration components |
US10296392B2 (en) | 2015-04-17 | 2019-05-21 | Microsoft Technology Licensing, Llc | Implementing a multi-component service using plural hardware acceleration components |
US10511478B2 (en) | 2015-04-17 | 2019-12-17 | Microsoft Technology Licensing, Llc | Changing between different roles at acceleration components |
US11010198B2 (en) | 2015-04-17 | 2021-05-18 | Microsoft Technology Licensing, Llc | Data processing system having a hardware acceleration plane and a software plane |
WO2016168035A1 (en) * | 2015-04-17 | 2016-10-20 | Microsoft Technology Licensing, Llc | Locally restoring functionality at acceleration components |
US9792154B2 (en) | 2015-04-17 | 2017-10-17 | Microsoft Technology Licensing, Llc | Data processing system having a hardware acceleration plane and a software plane |
US10095590B2 (en) * | 2015-05-06 | 2018-10-09 | Stratus Technologies, Inc | Controlling the operating state of a fault-tolerant computer system |
US10216555B2 (en) | 2015-06-26 | 2019-02-26 | Microsoft Technology Licensing, Llc | Partially reconfiguring acceleration components |
US10270709B2 (en) | 2015-06-26 | 2019-04-23 | Microsoft Technology Licensing, Llc | Allocating acceleration component functionality for supporting services |
CN112035219A (en) * | 2020-09-10 | 2020-12-04 | 深信服科技股份有限公司 | Virtual machine data access method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
US7797587B2 (en) | 2010-09-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7797587B2 (en) | System and method of recovering from failures in a virtual machine | |
US20230011241A1 (en) | Hypervisor remedial action for a virtual machine in response to an error message from the virtual machine | |
US11080100B2 (en) | Load balancing and fault tolerant service in a distributed data system | |
US10481987B2 (en) | Storage policy-based automation of protection for disaster recovery | |
US10542049B2 (en) | Mechanism for providing external access to a secured networked virtualization environment | |
JP5111340B2 (en) | Method for monitoring apparatus constituting information processing system, information processing apparatus, and information processing system | |
US20150067147A1 (en) | Group server performance correction via actions to server subset | |
US10452469B2 (en) | Server performance correction using remote server actions | |
EP2598993A1 (en) | Providing application high availability in highly-available virtual machine environments | |
US10216601B2 (en) | Agent dynamic service | |
JP2009288836A (en) | System failure recovery method of virtual server, and its system | |
US10474491B2 (en) | Method and apparatus for managing cloud server in cloud environment | |
US10223407B2 (en) | Asynchronous processing time metrics | |
US10628170B1 (en) | System and method for device deployment | |
US20150019725A1 (en) | Server restart management via stability time | |
US9171024B1 (en) | Method and apparatus for facilitating application recovery using configuration information | |
US10936425B2 (en) | Method of tracking and analyzing data integrity issues by leveraging cloud services | |
US8914680B2 (en) | Resolution of system hang due to filesystem corruption | |
EP3629180B1 (en) | Method and system for reliably restoring virtual machines | |
US10749777B2 (en) | Computer system, server machine, program, and failure detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: DELL PRODUCTS L.P., TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VASUDEVAN, BHARATH;SANKARAN, ANANDA;SINGH, SUMANKUMAR;REEL/FRAME:019414/0339;SIGNING DATES FROM 20070605 TO 20070606
Owner name: DELL PRODUCTS L.P., TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VASUDEVAN, BHARATH;SANKARAN, ANANDA;SINGH, SUMANKUMAR;SIGNING DATES FROM 20070605 TO 20070606;REEL/FRAME:019414/0339 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT, TEXAS Free format text: PATENT SECURITY AGREEMENT (ABL);ASSIGNORS:DELL INC.;APPASSURE SOFTWARE, INC.;ASAP SOFTWARE EXPRESS, INC.;AND OTHERS;REEL/FRAME:031898/0001 Effective date: 20131029
Owner name: BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS FIRST LIEN COLLATERAL AGENT, TEXAS Free format text: PATENT SECURITY AGREEMENT (NOTES);ASSIGNORS:APPASSURE SOFTWARE, INC.;ASAP SOFTWARE EXPRESS, INC.;BOOMI, INC.;AND OTHERS;REEL/FRAME:031897/0348 Effective date: 20131029
Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA Free format text: PATENT SECURITY AGREEMENT (TERM LOAN);ASSIGNORS:DELL INC.;APPASSURE SOFTWARE, INC.;ASAP SOFTWARE EXPRESS, INC.;AND OTHERS;REEL/FRAME:031899/0261 Effective date: 20131029 |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: COMPELLENT TECHNOLOGIES, INC., MINNESOTA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216 Effective date: 20160907
Owner name: DELL INC., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216 Effective date: 20160907
Owner name: SECUREWORKS, INC., GEORGIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216 Effective date: 20160907
Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216 Effective date: 20160907
Owner name: DELL MARKETING L.P., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216 Effective date: 20160907
Owner name: WYSE TECHNOLOGY L.L.C., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216 Effective date: 20160907
Owner name: FORCE10 NETWORKS, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216 Effective date: 20160907
Owner name: ASAP SOFTWARE EXPRESS, INC., ILLINOIS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216 Effective date: 20160907
Owner name: PEROT SYSTEMS CORPORATION, TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216 Effective date: 20160907
Owner name: APPASSURE SOFTWARE, INC., VIRGINIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216 Effective date: 20160907
Owner name: CREDANT TECHNOLOGIES, INC., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216 Effective date: 20160907
Owner name: DELL SOFTWARE INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216 Effective date: 20160907
Owner name: DELL USA L.P., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216 Effective date: 20160907 |
|
AS | Assignment |
Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001 Effective date: 20160907
Owner name: WYSE TECHNOLOGY L.L.C., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001 Effective date: 20160907
Owner name: COMPELLENT TECHNOLOGIES, INC., MINNESOTA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001 Effective date: 20160907
Owner name: DELL SOFTWARE INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001 Effective date: 20160907
Owner name: DELL USA L.P., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001 Effective date: 20160907
Owner name: DELL MARKETING L.P., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001 Effective date: 20160907
Owner name: DELL INC., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001 Effective date: 20160907
Owner name: PEROT SYSTEMS CORPORATION, TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001 Effective date: 20160907
Owner name: FORCE10 NETWORKS, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001 Effective date: 20160907
Owner name: APPASSURE SOFTWARE, INC., VIRGINIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001 Effective date: 20160907
Owner name: ASAP SOFTWARE EXPRESS, INC., ILLINOIS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001 Effective date: 20160907
Owner name: CREDANT TECHNOLOGIES, INC., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001 Effective date: 20160907
Owner name: SECUREWORKS, INC., GEORGIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001 Effective date: 20160907
Owner name: DELL SOFTWARE INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618 Effective date: 20160907
Owner name: DELL MARKETING L.P., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618 Effective date: 20160907
Owner name: SECUREWORKS, INC., GEORGIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618 Effective date: 20160907
Owner name: ASAP SOFTWARE EXPRESS, INC., ILLINOIS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618 Effective date: 20160907
Owner name: WYSE TECHNOLOGY L.L.C., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618 Effective date: 20160907
Owner name: APPASSURE SOFTWARE, INC., VIRGINIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618 Effective date: 20160907
Owner name: FORCE10 NETWORKS, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618 Effective date: 20160907
Owner name: COMPELLENT TECHNOLOGIES, INC., MINNESOTA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618 Effective date: 20160907
Owner name: CREDANT TECHNOLOGIES, INC., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618 Effective date: 20160907
Owner name: DELL USA L.P., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618 Effective date: 20160907
Owner name: DELL INC., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618 Effective date: 20160907
Owner name: PEROT SYSTEMS CORPORATION, TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618 Effective date: 20160907
Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618 Effective date: 20160907 |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT, TEXAS Free format text: SECURITY AGREEMENT;ASSIGNORS:ASAP SOFTWARE EXPRESS, INC.;AVENTAIL LLC;CREDANT TECHNOLOGIES, INC.;AND OTHERS;REEL/FRAME:040136/0001 Effective date: 20160907
Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT, NORTH CAROLINA Free format text: SECURITY AGREEMENT;ASSIGNORS:ASAP SOFTWARE EXPRESS, INC.;AVENTAIL LLC;CREDANT TECHNOLOGIES, INC.;AND OTHERS;REEL/FRAME:040134/0001 Effective date: 20160907 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552) Year of fee payment: 8 |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES, INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:049452/0223 Effective date: 20190320 |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:053546/0001 Effective date: 20200409 |
|
AS | Assignment |
Owner name: WYSE TECHNOLOGY L.L.C., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001 Effective date: 20211101
Owner name: SCALEIO LLC, MASSACHUSETTS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001 Effective date: 20211101
Owner name: MOZY, INC., WASHINGTON Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001 Effective date: 20211101
Owner name: MAGINATICS LLC, CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001 Effective date: 20211101
Owner name: FORCE10 NETWORKS, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001 Effective date: 20211101
Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001 Effective date: 20211101
Owner name: EMC CORPORATION, MASSACHUSETTS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001 Effective date: 20211101
Owner name: DELL SYSTEMS CORPORATION, TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001 Effective date: 20211101
Owner name: DELL SOFTWARE INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001 Effective date: 20211101
Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001 Effective date: 20211101
Owner name: DELL MARKETING L.P., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001 Effective date: 20211101
Owner name: DELL INTERNATIONAL, L.L.C., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001 Effective date: 20211101
Owner name: DELL USA L.P., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001 Effective date: 20211101
Owner name: CREDANT TECHNOLOGIES, INC., TEXAS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001 Effective date: 20211101
Owner name: AVENTAIL LLC, CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001 Effective date: 20211101
Owner name: ASAP SOFTWARE EXPRESS, INC., ILLINOIS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001 Effective date: 20211101 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |
|
AS | Assignment |
Owner name: SCALEIO LLC, MASSACHUSETTS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001 Effective date: 20220329
Owner name: EMC IP HOLDING COMPANY LLC (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MOZY, INC.), TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001 Effective date: 20220329
Owner name: EMC CORPORATION (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MAGINATICS LLC), MASSACHUSETTS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001 Effective date: 20220329
Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001 Effective date: 20220329
Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001 Effective date: 20220329
Owner name: DELL INTERNATIONAL L.L.C., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001 Effective date: 20220329
Owner name: DELL USA L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001 Effective date: 20220329
Owner name: DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.), TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001 Effective date: 20220329
Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO ASAP SOFTWARE EXPRESS, INC.), TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001 Effective date: 20220329 |
|
AS | Assignment |
Owner name: SCALEIO LLC, MASSACHUSETTS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001 Effective date: 20220329
Owner name: EMC IP HOLDING COMPANY LLC (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MOZY, INC.), TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001 Effective date: 20220329
Owner name: EMC CORPORATION (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MAGINATICS LLC), MASSACHUSETTS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001 Effective date: 20220329
Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001 Effective date: 20220329
Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001 Effective date: 20220329
Owner name: DELL INTERNATIONAL L.L.C., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001 Effective date: 20220329
Owner name: DELL USA L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001 Effective date: 20220329
Owner name: DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.), TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001 Effective date: 20220329
Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO ASAP SOFTWARE EXPRESS, INC.), TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001 Effective date: 20220329 |