We know you have questions. Here are answers to the questions we hear most often.
If you have a question not listed below, please use our Quick Contact form on the right side of most pages.
Yes. Since XCP-ng is the open-source derivative of XenServer, we have successful installations on XCP-ng versions 7 and 8.
Yes. HA-Lizard is developed solely to provide high availability and redundancy for Citrix XenServer. We DO NOT add any tracking, advertising, third-party adware, malicious code, or malware. Since the software is open source, please ensure you obtain releases and updates only from HA-Lizard.
Yes, it is possible under the following criteria:
With 4 hosts, 3 or more quorum votes are required for a slave to become the new master. Since you have 2 hosts in each location, the pool could become evenly split. We handle this situation with an additional vote that the surviving hosts can obtain by trying to reach a network point that all 4 hosts can reach during normal operation. So, there are 2 scenarios in your case:
1- Only the master has failed. One of the 3 surviving slaves becomes the new pool master, and any VMs that were running on the failed master are started on the remaining 3 hosts. None of the slaves reboot in this case, and all VMs that were running on the slaves continue without interruption. If the master encountered a hardware failure, there is no longer any logic driving it; if it rejoins the pool later, it is automatically demoted to a slave and gracefully rejoins the pool. If the master encountered a network issue, its VMs would still be running. We detect this condition and reboot only the master to ensure its VMs are no longer running, making it safe to start them elsewhere.
2- Your network has become split and you are left with 2 hosts at each location. This is where the additional quorum vote is important. A network point normally reachable by all 4 hosts is used to ascertain the nature of a potential network split. Assuming the location where the master runs has separated from the network, the other location recovers the pool if its 2 slaves can reach the pre-configured network point, increasing their quorum by 1. At the failed location, the slave immediately shuts down its VMs and the master reboots once. If the network point used for quorum is unreachable by all 4 hosts, there is no way to intelligently recover the pool, and all hosts shut down their running VMs to ensure data integrity. The slave selected to become the new master reboots itself; the remaining slaves do not.
One caveat concerns selecting a suitable slave to recover the pool. Normally we preselect (automatically) which slave can recover the pool, which works well when all the hosts are in the same location. Given that your locations are split, we would need to develop a feature allowing you to explicitly set the slave that can recover the pool. To cover both of the above scenarios, that slave would have to be at Site 2 (where the master does not normally run). This would allow recovery when the master fails or the network becomes split. Please contact HA-Lizard for custom development if this feature is required.
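The majority-vote logic described above can be sketched as follows. This is an illustrative model only, not HA-Lizard's actual implementation; the function name and the representation of the extra "network point" vote are assumptions made for clarity:

```python
def has_quorum(pool_size: int, reachable_peers: int, network_point_up: bool) -> bool:
    """Return True if this host's partition may recover the pool.

    Votes = this host itself + the peers it can still reach, plus one
    extra vote if the pre-configured network point is reachable.
    A strict majority of the pool is required.
    """
    votes = 1 + reachable_peers + (1 if network_point_up else 0)
    return votes > pool_size / 2

# 4-host pool split 2/2: the side that can reach the network point
# gains the tie-breaking third vote and recovers the pool.
print(has_quorum(4, 1, True))    # 2 hosts + tie-breaker = 3 votes
print(has_quorum(4, 1, False))   # 2 votes only: shut down VMs

# Only the master has failed: the 3 surviving slaves reach quorum
# on their own, without the network-point vote.
print(has_quorum(4, 2, False))
```

In the even-split case, only the partition that can reach the network point crosses the 3-vote threshold, which is why that point must be reachable by all hosts during normal operation.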
Yes, HA-Lizard and iSCSI-HA provide tools and functionality so that pool upgrades can be performed without any VM downtime.
No. HA-Lizard stores all of its settings in a shared pool database. A simple command line tool can be used on any member of a pool to make changes for the entire pool.
HA-Lizard has NO software dependencies beyond what is available on a standard XenServer installation; no additional packages are required.
If you choose to also install iSCSI-HA, then DRBD is required for network storage replication, and the SCSI Target Framework (TGT) is required to expose the storage via iSCSI.
There are two methods for building the cluster:
1- Follow the installation video on our YouTube page. The entire cluster can be built from scratch in under 30 minutes with our automated noSAN installer.
2- Manual approach: follow the detailed instructions in our reference design, available on this website.
There are two methods available for creating your noSAN cluster:
1- Register to download HA-Lizard’s noSAN installer. This installer installs and configures all the necessary components in about a minute; the total time required to create a noSAN cluster is about 10 minutes. A video on how to install is available on our YouTube page.
2- Read our reference design, which provides detailed steps for manually installing and configuring the necessary components. Although this approach takes more time, it leaves you with a better understanding of how HA-Lizard works.
High availability for the pool and each VM can be managed from within XenCenter.
Configuration parameters for HA-Lizard are managed with the command-line tool “ha-cfg”.
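A typical session might look like the following. The parameter name shown is illustrative; run the tool on your own pool to see the actual list of settings:

```shell
# View all pool-wide HA-Lizard settings (stored in the shared pool database)
ha-cfg get

# Change a setting from any pool member; because settings live in the
# shared pool database, the change applies to the entire pool.
# Parameter name and value are examples only.
ha-cfg set GLOBAL_VM_HA 1

# Show overall HA status
ha-cfg status
```

Because any pool member can run these commands, there is no single configuration host to keep in sync.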
| Feature | HA-Lizard | XenServer HA |
| --- | --- | --- |
| Open Source | Yes | Yes |
| Host High Availability Support | Yes | Yes |
| VM Watchdog | Yes | No |
| Support for 2-node HA Pool | Yes | No |
| Configured Through XenCenter | Partially | Yes |
| Command Line Interface | Yes | Yes (via XE) |
| Hardware Fencing | Yes | No |
Yes. HA-Lizard is free, open-source software released under the GPL.
No. HA-Lizard works with any number of hosts within a pool and provides reliable high availability in a 2-node pool.
HA-Lizard comes in 3 formats for maximum flexibility.
HA-Lizard is open source software available to any person or organization that requires High Availability for Citrix XenServer. We specialize in providing high availability for 2 node pools with internal hard drive storage, but the software can provide high availability for multiple hosts and Storage Area Network (SAN) applications. HA-Lizard comes in 3 formats to allow you maximum flexibility for your requirement.
iSCSI-HA is an add-on software component developed to complement a 2-node HA-Lizard cluster. It allows users to create a 2-node high availability cluster while utilizing local storage (hard drives within the servers).
“noSAN” (or “noSAN cluster”) is the term used for an HA-Lizard 2-node pool that utilizes local storage. This special configuration combines two HA-Lizard software projects: HA-Lizard + iSCSI-HA = noSAN.
Typically, a highly available virtualization cluster is not possible without a minimum of three hosts and an external Storage Area Network (SAN). HA-Lizard’s “noSAN” accomplishes this with just 2 hosts utilizing each host's local storage, so no SAN is required.
The most common use of our software is in a 2-node XenServer environment with local storage. This setup specifically addresses two issues:
1- HA in a 2-node pool is not reliably supported by XenServer; HA-Lizard was designed to provide predictable failover in a 2-node pool.
2- Local storage is converted into a form of internal Storage Area Network (SAN) so that VMs can be live-migrated and restarted without any of the limitations of Direct Attached Storage (DAS). This is done with our iSCSI-HA package and allows users to build 2-node pools without any external SAN while realizing all the functionality (mainly VM agility) of a SAN environment.
Many users implement HA-Lizard + iSCSI-HA to realize the scenario described above. This design is well documented in the reference design and how-to published on our site.
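To make the local-storage replication concrete, a minimal DRBD resource definition for two hosts might look like the sketch below. The resource name, hostnames, devices, and addresses are placeholders, not values from the HA-Lizard reference design; consult that document for the exact recommended settings:

```
# /etc/drbd.d/iscsi.res -- illustrative only
resource iscsi {
    protocol C;                  # synchronous replication between the 2 hosts
    on xen-host1 {
        device    /dev/drbd0;    # replicated block device exposed via TGT
        disk      /dev/sdb1;     # local backing disk (placeholder)
        address   10.10.10.1:7789;
        meta-disk internal;
    }
    on xen-host2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.10.10.2:7789;
        meta-disk internal;
    }
}
```

DRBD keeps the two local disks in sync, and TGT exposes the replicated device over iSCSI so the pool sees it as shared storage.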
In the United States of America (USA).
HA-Lizard is the only reliable high-availability solution in the industry for 2-node XenServer pools.