Forum

Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /homepages/13/d467848118/htdocs/templates/cleanout/vertex/responsive/responsive_mobile_menu.php on line 158

TOPIC:

2 or more iSCSI 9 years 2 months ago #538

Salvatore Costantino
Hi Robert,
You should read this document to better understand what the installer is actually doing. It also has a section on how to initialize DRBD for the first time; that is what is missing for the iscsi2 resource in your test environment.


2 or more iSCSI 9 years 2 months ago #541

Thank you.

This is what worked for me with the config files attached above:

# initialize the disk (WARNING: destroys data on /dev/sdb)
dd if=/dev/zero bs=1M count=1 of=/dev/sdb
# create DRBD metadata for the iscsi2 resource
drbdadm create-md iscsi2

# restart DRBD so it picks up the new resource
service drbd restart

# on one node only: force this side primary and sync its data to the peer
drbdadm -- --overwrite-data-of-peer primary iscsi2
# clear the LVM device cache and rescan for volume groups
rm -f /etc/lvm/cache/.cache && vgscan

# reboot
init 6
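
After the reboot, it may help to confirm the resource is connected and in sync. A quick check on DRBD 8.x (the expected states are cs:Connected and ds:UpToDate/UpToDate):

# connection state (cs:) and disk states (ds:) for all resources
cat /proc/drbd

# same information via the init script
service drbd status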


2 or more iSCSI 9 years 2 months ago #569

I'm gonna add to this thread a little bit.

I'm currently upgrading my cluster to include 3 servers instead of two. It's going to be a little complicated.

I am going to have two DRBD resources, but only one resource (SAS drives) is going to be replicated across all servers; the other is going to be an SSD resource that is only available on two of the servers.

So basically, HA for the SAS resource will work across three servers while the SSD resource will work across two. Does this seem possible to set up with a few modifications to the config files listed?
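
For reference, DRBD 8.4 (common at the time) replicates each resource between exactly two nodes, so a three-way SAS resource would need stacked resources; DRBD 9 can list more than two hosts per resource directly. A hypothetical DRBD 9-style sketch of the mixed layout (the hostnames, addresses, and devices are made up):

# three-way resource on the SAS drives
resource sas {
    device     /dev/drbd1;
    disk       /dev/sdb;
    meta-disk  internal;
    on server1 { node-id 0; address 10.10.10.1:7789; }
    on server2 { node-id 1; address 10.10.10.2:7789; }
    on server3 { node-id 2; address 10.10.10.3:7789; }
    connection-mesh { hosts server1 server2 server3; }
}

# two-way resource on the SSDs, present on only two of the hosts
resource ssd {
    device     /dev/drbd2;
    disk       /dev/sdc;
    meta-disk  internal;
    on server1 { node-id 0; address 10.10.10.1:7790; }
    on server2 { node-id 1; address 10.10.10.2:7790; }
}

Whether the HA failover layer copes with such a mixed layout is a separate question, addressed in the replies below.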

So then let me also ask: what does it take to add a third or even fourth server to the cluster to do a smooth migration of VMs?


2 or more iSCSI 9 years 2 months ago #571

  • Salvatore Costantino
  • Salvatore Costantino's Avatar
  • Offline
  • Posts: 727
Adding a third or fourth server to the pool is fairly easy since the storage is exposed through iSCSI. Assuming your replication links are directly attached between the two current servers, disconnect the replication links (one at a time) and run them through an Ethernet switch (basically, put a switch in the middle of the replication network). Any server you want to add to the pool would connect its replication interface to the same switch. Once the new server is introduced to the pool, it should automatically connect to the storage.

This should be done for VM migration only, as it introduces a single point of failure (the switch) into the replication network. It is not recommended for a production environment.


2 or more iSCSI 9 years 2 months ago #572

sc wrote: Adding a third or fourth server to the pool is fairly easy since the storage is exposed through iSCSI. [...] Not recommended for a production environment.


I was thinking more about creating a direct-link 10G mesh between the servers. I believe that would be perfectly fine; my only concern is how the heartbeat would be managed.


2 or more iSCSI 9 years 2 months ago #573

Salvatore Costantino
Hi Bob,
Heartbeat should not be affected, as the management links are used for all HA checks. So, assuming your replication/storage network is separate from the management network, there is no issue.

However, the scenario of iSCSI-HA storage in a pool with more than 2 servers is still a bit dangerous. Here's why:

Failover logic used to expose the storage on either the Master OR the Slave would not be reliable in a pool with more than 2 hosts. This is because iSCSI-HA follows the master and exposes the storage there. So, in a failover scenario where the slave becomes the new master, iSCSI-HA will expose the storage on the new master.

In a pool with more than 2 hosts, the new master could be elected from a slave that does NOT have the storage (i.e., a host that is not running iSCSI-HA locally). In this case, your VMs would lose access to their disks.
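
To make the failure mode concrete, the follow-the-master behavior amounts to something like the following. This is a hypothetical sketch of the logic, not iSCSI-HA's actual code; the resource name iscsi1 and the tgtd target daemon are assumptions:

# determine whether this host is the current pool master
. /etc/xensource-inventory                          # provides INSTALLATION_UUID
MASTER_UUID=$(xe pool-list params=master --minimal)

if [ "$INSTALLATION_UUID" = "$MASTER_UUID" ]; then
    # master: promote DRBD and expose the iSCSI target here
    drbdadm primary iscsi1
    service tgtd start
else
    # not master: stop exposing storage and demote
    service tgtd stop
    drbdadm secondary iscsi1
fi

On a newly elected master with no local DRBD resource, the promotion step has nothing to act on, which is exactly the failure described above.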

With all that said, we plan on releasing an update to iSCSI-HA in the future that would provide a generic 2-node iSCSI storage cluster solution (not tied to XenServer at all). This solution will be identical to the current iSCSI-HA, except that recovery will NOT follow the master. Instead, iSCSI-HA will operate in an autonomous mode, deciding for itself which host should be exposing storage. This solution can be run:
- within 2 x dom0
- within any 2 x domU
- or completely standalone on dedicated HW outside of the pool

No timeframe has been set for this release yet.
