Forum

TOPIC:

No bonding, BUT using DRBD multipathing and ISCSI multiple virtual ip 4 years 2 months ago #1958

  • Mircea Popescu (Topic Author)
  • Posts: 1
Hi All,

Just installed HA-LIZARD on XCP-NG and it went fine. I'd like to optimize a bit, though.
I have 2 nodes with 6x 1 Gb NICs each, 4 of which (per node) are dedicated to DRBD replication and iSCSI traffic. Following the default installation process, including the YouTube video that describes it, I've configured a bond across all 4 dedicated storage NICs. IMHO this is far from optimal, so I'd like to ask whether the following is possible.

The main reason is that the iSCSI protocol does not take advantage of the full aggregate bandwidth when running on top of a bonded interface ... this is a statement.
The second reason: is DRBD capable of using multiple IP addresses / subnets for replication? ... this is a question.

Ideally I'd go with the following scenario. Instead of a 4-NIC bond, I'd use individual NICs, with no bonding, and configure replication between the nodes over 4 IP addresses from 4 different subnets in /etc/drbd.conf. The config would look like this:
on v2 {
    device /dev/drbd1;
    disk /dev/md0;
    address x.y.100.11:7789;
    address x.y.101.11:7789;
    address x.y.102.11:7789;
    address x.y.103.11:7789;
    meta-disk internal;
}
on v1 {
    device /dev/drbd1;
    disk /dev/md0;
    address x.y.100.12:7789;
    address x.y.101.12:7789;
    address x.y.102.12:7789;
    address x.y.103.12:7789;
    meta-disk internal;
}

Second part, the /etc/iscsi-ha/iscsi-ha.conf:

instead of:
...
DRBD_RESOURCES=iscsi1
ISCSI_TARGET_SERVICE=tgtd
DRBD_VIRTUAL_IP=x.y.100.10
DRBD_VIRTUAL_MASK=255.255.255.0
DRBD_INTERFACE=xapi2
...
we could have:
...
DRBD_RESOURCES=iscsi1
ISCSI_TARGET_SERVICE=tgtd
DRBD_VIRTUAL_IP=x.y.100.10
DRBD_VIRTUAL_MASK=255.255.255.0
DRBD_INTERFACE=xapi2
DRBD_VIRTUAL_IP=x.y.101.10
DRBD_VIRTUAL_MASK=255.255.255.0
DRBD_INTERFACE=xapi3
DRBD_VIRTUAL_IP=x.y.102.10
DRBD_VIRTUAL_MASK=255.255.255.0
DRBD_INTERFACE=xapi4
DRBD_VIRTUAL_IP=x.y.103.10
DRBD_VIRTUAL_MASK=255.255.255.0
DRBD_INTERFACE=xapi5
...

Does this make any sense? It makes sense theoretically, but would DRBD actually use those 4 paths, and would iscsi-ha actually present the 4 IP addresses as targets?

A simplified version of this would be to use 2 bonds, one for replication and one for iSCSI traffic; that would most probably work, but it still wouldn't be fully satisfactory.

Regards,


No bonding, BUT using DRBD multipathing and ISCSI multiple virtual ip 4 years 2 months ago #1959

  • Salvatore Costantino
  • Posts: 722
If you are concerned about throughput on the replication interface, the simplest solution would be to use a 10 Gb Ethernet link for replication.

In answer to your questions:
- DRBD is not designed to load-balance across multiple networks; it relies on the underlying network layer to handle that. But you can come close to what you described by creating multiple DRBD devices. Each one would use a different port number (e.g. 7789, 7790, 7791, etc.). Each one would also require its own backing block device (or partition). With an active/active bond for replication, I am fairly certain you would use all of the interfaces, since the flows are unique with respect to port number.
- iSCSI-HA cannot take multiple instances of the floating IP and interface parameters. You can declare them multiple times, but only the last declaration will be used. However, it can handle multiple storage repositories over the same interface. So, as in the example above, multiple DRBD devices can be configured. Many users are doing this already.
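The multiple-device idea above could be sketched in /etc/drbd.conf roughly as follows. This is an untested illustration, not a recommended configuration: the resource names, subnets, ports, and backing devices are all assumptions.

```
# Hypothetical sketch: two independent DRBD resources, each with its own
# backing device, subnet, and port, so the two replication flows are
# distinct and can be spread across links by the network layer.
resource iscsi1 {
    device    /dev/drbd1;
    disk      /dev/md0;        # assumed backing device
    meta-disk internal;
    on v1 { address x.y.100.12:7789; }
    on v2 { address x.y.100.11:7789; }
}
resource iscsi2 {
    device    /dev/drbd2;
    disk      /dev/md1;        # each resource needs its own block device
    meta-disk internal;
    on v1 { address x.y.101.12:7790; }
    on v2 { address x.y.101.11:7790; }
}
```

With an active/active bond using a layer3+4 transmit hash, the differing port numbers should let the two flows land on different member links.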

If you decide to take this approach, you would also have to treat each DRBD device as a separate iSCSI storage repository, which would complicate management of the cluster. A faster replication link is probably your best option for better throughput.
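For the supported multi-repository setup (several DRBD resources behind one floating IP), the iscsi-ha.conf would keep a single IP/interface pair and list the resources once. This is only a sketch: the exact list separator for DRBD_RESOURCES is an assumption here, so check the iscsi-ha documentation for the precise form.

```
# Hedged sketch: one floating IP and interface, multiple DRBD resources
# exported by the same tgtd target. Separator in DRBD_RESOURCES is assumed.
DRBD_RESOURCES=iscsi1:iscsi2
ISCSI_TARGET_SERVICE=tgtd
DRBD_VIRTUAL_IP=x.y.100.10
DRBD_VIRTUAL_MASK=255.255.255.0
DRBD_INTERFACE=xapi2
```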

