
TOPIC:

NoSAN HA-ISCSI Thin Provisioning 2 years 6 months ago #2530

  • Maxwell Morris (Topic Author)
  • Posts: 12
Hi all,

Apologies if this has already been answered, but I couldn't find anything on it.

Since the HA-Lizard NoSAN solution provides an iSCSI target to Xen/XCP (which is not thin provisioned, so snapshots take up a good chunk of space), is it possible/easy to integrate either NFS on top of DRBD, or NFS on top of iSCSI, in conjunction with HA-Lizard?

Right now I would love to switch to HA-Lizard, but the lack of thin provisioning is holding me back.

Thanks!


NoSAN HA-ISCSI Thin Provisioning 2 years 6 months ago #2531

  • Salvatore Costantino
  • Posts: 722
I don't think that would work, since XCP/XenServer would still need to mount the SR as an iSCSI store, which comes with the limitation of thick provisioning only.

You could technically create a VM inside the pool that acts as an NFS server, mount its export as an NFS SR, and then have your remaining VMs use that for storage. I'm not sure about the performance hit that would introduce, though. Also, this makes the NFS storage potentially more fragile, since an HA event could leave stale mounts, etc., that would impact live/running VMs backed by NFS.
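
If you did go that route, attaching the export from that VM would just be a standard NFS SR on XCP/XenServer. A rough sketch, with placeholder values for the server address, export path and label:

    # Attach an NFS export as a shared SR (server/path/label are placeholders)
    xe sr-create content-type=user shared=true type=nfs \
        name-label="NFS-VM SR" \
        device-config:server=192.168.0.50 \
        device-config:serverpath=/exports/vm-storage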


NoSAN HA-ISCSI Thin Provisioning 2 years 5 months ago #2538

  • Maxwell Morris (Topic Author)
  • Posts: 12
Hi Salvatore,

Apologies for my delay, but would it be particularly difficult to replace the iSCSI component of HA-Lizard with an NFS server instead? From what I've been able to see so far, it looks like the master node (which is also the DRBD Primary) runs the TGTD service while the Secondary node has it stopped. Would it be possible for me to modify a few of the failover system's scripts to instead stop/start an NFS server?
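
To make it concrete, the kind of swap I have in mind on a role change is roughly this (just a sketch; the service names and mount path are my assumptions, since NFS needs a mounted filesystem rather than the raw DRBD device that tgtd exports):

    # Becoming DRBD Primary: serve the replicated volume over NFS instead of tgtd
    service tgtd stop                      # or simply never start it
    mount /dev/drbd1 /exports/vm-storage   # NFS needs a filesystem on the DRBD device
    service nfs start                      # "nfs-server" on newer distros
    exportfs -ra                           # publish the exports from /etc/exports

    # Becoming DRBD Secondary: tear it down before demoting with drbdadm
    exportfs -ua
    service nfs stop
    umount /exports/vm-storage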

Thanks,
Maxwell


NoSAN HA-ISCSI Thin Provisioning 2 years 5 months ago #2539

  • Salvatore Costantino
  • Posts: 722
Hi Maxwell,
On the surface, this would be pretty easy to do. You could try some small code changes (start/stop/track NFS instead of TGTD) to test how it works in a failover scenario. Assuming that works well, there are some things to consider before introducing this into our code:

- Look out for stale mounts in XCP/XenServer. Thorough testing would be required, and a framework would need to be put in place to ensure that dom0 does not get stuck due to a stale mount on a failed NFS target (a rough probe sketch follows below this list).

- Some CLI tool and status tool updates would be required to add the new functionality.

- A few config variables would need to be added too.
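
On the stale-mount point, something along these lines could be the basis of a probe in dom0; the SR uuid, mountpoint path and timeout are placeholders, not anything in the current code base:

    # Hypothetical stale-NFS-mount probe for dom0 (SR_UUID and timeout are placeholders)
    SR_UUID="<uuid-of-the-NFS-SR>"           # fill in the SR uuid
    MOUNTPOINT="/var/run/sr-mount/$SR_UUID"
    if ! timeout 5 stat "$MOUNTPOINT" >/dev/null 2>&1; then
        logger -t nfs-ha "NFS SR unresponsive at $MOUNTPOINT"
        # recovery action would go here (e.g. lazy unmount and PBD replug)
    fi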

I don't have the time to work on this with my current workload, but I can support you and assist with the minor code changes needed to get a proof of concept working.


NoSAN HA-ISCSI Thin Provisioning 2 years 5 months ago #2540

  • Maxwell Morris (Topic Author)
  • Posts: 12
Hi Salvatore,

That's great news! I'll start digging through the files. Can you give me any general pointers as to where the scripts that control the TGT service live? I would be more than happy to test.

Out of interest, is there a particular reason you guys decided to go with iSCSI rather than NFS at design inception?

Thanks,
Maxwell


NoSAN HA-ISCSI Thin Provisioning 2 years 5 months ago #2541

  • Salvatore Costantino
  • Posts: 722
Maxwell,
One more thing to consider - which may be most important.

When our solution was built around iSCSI, we ensured that it would be very resilient to the storage flapping back and forth between the underlying hosts (storage will switch over in an HA event, or in a maintenance event when using manual mode). This works well in an iSCSI client/target scenario because the protocol provides validation on writes and allows us to seamlessly move the storage back and forth between hosts. With file-based storage this may not be so easy, and I would be concerned about data integrity in certain crash and failover scenarios.

To test integrity through multiple failovers: start a significant file transfer that is being written to a VM backed by NFS. Put iscsi-ha into manual mode, then switch primary/secondary roles multiple times while the data transfer in the VM is ongoing. Assuming the transfer completes, md5 the transferred file to check integrity. I would make this the first test, to quickly ascertain whether NFS failover can really work without file corruption.
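
Put loosely, the check at the end of that test could be as simple as comparing checksums of the source file and what landed inside the VM. The paths and VM address below are placeholders, and the role switching itself is done with whatever iscsi-ha's manual-mode tooling provides on your version:

    # Build a test payload and record its checksum (outside the VM)
    dd if=/dev/urandom of=/tmp/testfile bs=1M count=4096   # ~4 GB payload
    md5sum /tmp/testfile > /tmp/testfile.md5

    # Copy it into the NFS-backed VM; while this runs, flip the
    # primary/secondary roles several times using manual mode
    scp /tmp/testfile root@<vm-address>:/root/testfile

    # Inside the VM afterwards, the checksum must match the recorded one
    md5sum /root/testfile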
