TOPIC:

NoSAN HA-ISCSI Thin Provisioning 2 years 4 months ago #2554

  • Maxwell Morris
  • Posts: 12
Hey Nathan,

I agree with your point about performance with NFS vs iSCSI, as my testing did indeed show that IOPS were negatively impacted by NFS (though not to a degree that would particularly affect my specific workload). This is for a small home deployment, where resources are limited, so I do not have a SAN on which to deploy a usual iSCSI target, which made HA-Lizard NoSAN perfect for the 2-node DRBD replicated shared storage. The catch with XAPI-based virt systems (XenServer, XCP-ng, etc.) is that iSCSI storage can only be added to the pool by applying LVM on top (so you end up with LVM over iSCSI), which is thick provisioned.
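
For reference, this is roughly how an iSCSI SR gets attached to a XAPI pool; the target address, IQN, and SCSI ID below are placeholders, not from my setup. The lvmoiscsi SR type is what xe gives you for block storage, and it is exactly where the thick provisioning comes from:

    # Attach an iSCSI LUN as a shared SR; XAPI layers LVM on top,
    # so every VDI on it ends up thick provisioned.
    xe sr-create name-label="iscsi-sr" shared=true type=lvmoiscsi \
       device-config:target=<target-ip> \
       device-config:targetIQN=<target-iqn> \
       device-config:SCSIid=<scsi-id>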

Basically, my reason for wanting to use HA-Lizard with NFS was to achieve thin provisioning for my Xen VMs; storage space in my virt nodes is limited, and thin provisioning would massively help with snapshotting and the like.
The performance hit, but more importantly the modifications required to the HA-Lizard code, and then of course maintaining those through further releases, are why I decided not to go ahead in the end.

Thanks,
Maxwell


NoSAN HA-ISCSI Thin Provisioning 2 years 4 months ago #2555

Ahh, I see. Yes, the snapshot overhead problem...

I run a server farm in a data centre and use snapshots to live-export backups of virtual office VMs, and it is rather frustrating having to size the snapshot overhead to match the largest VM in the pool (snapshotting a 500 GB thick-provisioned VDI needs another 500 GB free on the SR, for example). There's also the cost factor: storage needs to be doubled in size as well as physically provisioned twice.

My proposal here is to develop a thin-provisioned, HA-redundant iSCSI LUN. You could choose to run a small form factor single server or, for data centres, an HA-Lizard pair.

I haven't had a chance to thoroughly investigate viability, but my understanding is that iSCSI thin provisioning is rather straightforward when a hypervisor is not part of the stack.
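
Something along these lines is what I have in mind for the target side, at least; the volume group, sizes, and IQN are invented and I haven't tested this yet:

    # LVM thin pool, plus an over-allocated thin volume to act
    # as the LUN's backing store.
    lvcreate -L 500G -T vg0/thinpool
    lvcreate -V 2T -T vg0/thinpool -n lun0

    # /etc/tgt/conf.d/san.conf -- export it over iSCSI with tgt
    <target iqn.2015-01.local.san:lun0>
        backing-store /dev/vg0/lun0
    </target>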

Cheers
Nathan


NoSAN HA-ISCSI Thin Provisioning 2 years 4 months ago #2556

  • Maxwell Morris
  • Posts: 12
Hi,

If you plan to use an iSCSI LUN with a Xen hypervisor in any context, the VM storage on it will need to be thick provisioned. But if you are just looking for plain HA iSCSI storage, you should probably look at DRBD for replication and TGT as the iSCSI target, along with either Heartbeat or Pacemaker for HA clustering. They will handle monitoring of your hosts, stop/start the TGT service when needed, and manage the DRBD Primary/Secondary states.
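
The DRBD half of that recipe is just a two-node replicated resource; a minimal sketch (hostnames, backing disk, and addresses are placeholders):

    # /etc/drbd.d/r0.res -- replicate the LUN's backing device
    resource r0 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        meta-disk internal;
        net { protocol C; }   # synchronous replication
        on node1 { address 10.0.0.1:7789; }
        on node2 { address 10.0.0.2:7789; }
    }

Pacemaker (or Heartbeat) then promotes r0 to Primary on one node, starts the TGT service there, and fails the whole stack over together with a floating IP.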

Just one man's opinion :)


NoSAN HA-ISCSI Thin Provisioning 2 years 4 months ago #2557

  • Maxwell Morris
  • Posts: 12
This is, of course, if you were looking for a hyper-converged model in your data center. If you have, or are planning to have, a small SAN, TrueNAS Core has built-in replication (HA failover is in the Enterprise edition, to the best of my knowledge), so you could achieve highly available iSCSI targets that way.
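
Under the hood that replication is ordinary ZFS send/receive, so you can also hand-roll it; the pool, dataset, and host names here are made up:

    # Snapshot the dataset backing the iSCSI extent, then
    # replicate it to the standby box over SSH.
    zfs snapshot tank/iscsi@rep1
    zfs send tank/iscsi@rep1 | ssh standby zfs recv -F backup/iscsi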


NoSAN HA-ISCSI Thin Provisioning 2 years 4 months ago #2558

  • Salvatore Costantino
  • Posts: 722
Hi Konos,
Thanks for your comments. Yes, iscsi-ha could reasonably be modified into a standalone product for building HA iSCSI SANs; however, there are already ways of handling this with DRBD/Heartbeat recipes. If you are interested in creating such a solution, I can point you to all the areas that would need updating. Unfortunately, with my current workload, I don't have the time to do much more than that.
