
New install of HA-Lizard HA-iSCSI - all SSD - any experience/knowledge to share? 9 months 2 weeks ago #2266

Hello everybody,

for four years we have been successfully running two separate clusters with HA-Lizard and HA-iSCSI. They are equipped with standard 6Gb SAS HDDs.

Now we have to replace one of them, and for performance reasons we want to use enterprise-grade SSDs instead of SAS HDDs.

We plan to install a RAID5 array with 6x 1.92TB SSDs, including one hot-spare drive.

Would you share your experience/knowledge about such an approach?

Best regards
Andreas


New install of HA-Lizard HA-iSCSI - all SSD - any experience/knowledge to share? 7 months 1 day ago #2342

So, as a result after more than two months: nobody is using HA-Lizard with an "all flash" setup. :-(


New install of HA-Lizard HA-iSCSI - all SSD - any experience/knowledge to share? 7 months 1 day ago #2343

  • Salvatore Costantino
Hi Andreas,
There are no issues with using SSD-based storage. It is transparent to the application and should give you a performance boost. You should consider a 10Gbps Ethernet link for replication (if you are not doing that already).

Also, as an aside, our default DRBD conf file has been rather slow at performing the initial sync. The updated conf file below will ship with our next installer in a week or so. For 10Gbps Ethernet, change c-max-rate to 1500M.
global
{
    usage-count no;
} 

common
{
    disk
    {
        c-max-rate 150M;
        c-fill-target 1M;
    }

    net
    {
        max-buffers 60000; 
        after-sb-0pri discard-zero-changes; 
        after-sb-1pri consensus;
        cram-hmac-alg sha1;
        shared-secret PUTyourSECREThere; 
    }
    
    handlers
    {
        split-brain "/etc/iscsi-ha/scripts/drbd-split-brain-alert";
    }
} 

resource iscsi1
{ 
    protocol C; 

    on dev1
    { 
        device /dev/drbd1; 
        disk /dev/sda3; 
        address 10.10.10.1:7789; 
        meta-disk internal; 
    } 

    on dev2
    { 
        device /dev/drbd1; 
        disk /dev/sda3; 
        address 10.10.10.2:7789; 
        meta-disk internal; 
    } 
}
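As a rough illustration of why the c-max-rate change matters: a back-of-the-envelope estimate (assuming the 1.92TB drives from this thread and that the sync actually sustains the configured rate, which real disk and network load may prevent) shows how long a full initial sync would take at the old and new rates.

```python
# Rough initial-sync duration estimate for a DRBD volume.
# Illustrative only; actual sync speed depends on disk and network load.

def sync_hours(volume_bytes: float, rate_bytes_per_s: float) -> float:
    """Return the estimated full-sync duration in hours."""
    return volume_bytes / rate_bytes_per_s / 3600

VOLUME = 1.92e12  # 1.92 TB SSD, as discussed in the thread

old = sync_hours(VOLUME, 150e6)    # c-max-rate 150M (default conf)
new = sync_hours(VOLUME, 1500e6)   # c-max-rate 1500M (10Gbps link)

print(f"c-max-rate 150M:  ~{old:.1f} hours")   # ~3.6 hours
print(f"c-max-rate 1500M: ~{new:.2f} hours")   # ~0.36 hours (about 21 min)
```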


New install of HA-Lizard HA-iSCSI - all SSD - any experience/knowledge to share? 6 months 4 weeks ago #2350

Dear Salvatore,

thank you for your reply and suggestion. Currently we have two 1Gbps NICs bonded for replication. Should we also use a bonded link in the new setup, i.e. two 10Gbps NICs?

It seems somewhat overpowered...

BR Andreas


New install of HA-Lizard HA-iSCSI - all SSD - any experience/knowledge to share? 6 months 4 weeks ago #2352

  • Salvatore Costantino
Hi Andreas, you should consider 10Gb Ethernet only if your disk throughput exceeds 1Gbps.

For example, if your disks are capable of 300MB/s throughput, that equates to 2.4Gbit/s, which would saturate a 1Gbps link.
Also consider that all read/write activity (from the host's perspective) happens on the primary node only. The replication link also determines how quickly data reaches the peer host: with synchronous replication (DRBD protocol C), the primary will not complete a write until the secondary has committed its copy, so a slow replication link can leave the cluster disk-bound.

So, chances are that you will see a performance benefit with 10Gb Ethernet, but it really has more to do with your underlying disk throughput.
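The arithmetic above can be sketched as a quick check, using the 300MB/s figure from the post (8 bits per byte, and treating 1 Gbit as 1000 Mbit, as link speeds are specified):

```python
# Check whether a replication link can carry a given disk throughput.

def disk_gbps(mbytes_per_s: float) -> float:
    """Convert disk throughput in MB/s to link bandwidth in Gbit/s."""
    return mbytes_per_s * 8 / 1000

rate = disk_gbps(300)  # 300 MB/s disk -> 2.4 Gbit/s on the wire
print(f"{rate} Gbit/s")
print("saturates a 1 Gbps link:", rate > 1.0)    # True
print("fits in a 10 Gbps link:", rate < 10.0)    # True
```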


New install of HA-Lizard HA-iSCSI - all SSD - any experience/knowledge to share? 2 months 2 weeks ago #2501

Dear Salvatore,
as we plan to follow the "iSCSI-HA 2-node cluster howto" again for our next setup, I am struggling a bit with the bonded network for the replication and iSCSI interface. Two bonded 10Gb NICs seem somewhat overpowered, but on the other hand using only one link provides no redundancy (SPoF risk).

Any thoughts about this "conflict"?
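One common way to resolve this conflict is an active-backup bond: the second 10Gb NIC carries no traffic until the first fails, so it adds redundancy without implying you need the aggregate bandwidth. A sketch using the XenServer/XCP-ng `xe` CLI, assuming a dedicated replication network; the UUIDs are placeholders you would look up with `xe network-list` and `xe pif-list`:

```shell
# Create a dedicated network for the replication bond
xe network-create name-label=replication-bond

# Bond the two 10Gb PIFs in active-backup mode:
# only one link is active at a time, the other is a hot standby.
xe bond-create network-uuid=<network-uuid> \
    pif-uuids=<pif-uuid-1>,<pif-uuid-2> \
    mode=active-backup
```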

