
TOPIC: Crash simulation

Crash simulation 10 years 6 months ago #125

In your screenshot it looks like you are using IP 10.10.10.1, which is different from the status, 10.10.10.3. In XenCenter, the iSCSI target IP should be 10.10.10.3 based on the information in your post.
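If it helps, you can confirm which target IP the SR is actually using by checking the device-config on its PBDs (just a sketch; replace the sr-uuid placeholder with the uuid of your iSCSI SR):

xe pbd-list sr-uuid=<your-sr-uuid> params=device-config

The target field in the output should show 10.10.10.3.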

Did you check LVM per the previous post? It is crucial that:

1 - the LVM cache is cleared
2 - the LVM cache is disabled
3 - LVM filters are updated so the host ONLY reads LVM metadata from the iSCSI device (meaning block the local backing device (e.g. /dev/sdb) and also block the DRBD device (e.g. /dev/drbd1))

If these three steps are not precisely followed, you may end up in a situation where the host reads LVM metadata from the local device, which is very bad as it would bypass DRBD. There would be no replication and no backup. You would also see the "VDI not available" error. Below is a snippet from the documentation which describes the necessary steps for properly initializing LVM:

These steps should be completed on both hosts:

Update LVM filters
LVM filters must be updated to prevent VG/LV metadata from being read from both the backing block device and the DRBD device. VG/LV data must ONLY be read from /dev/iscsi. This step is mandatory for proper operation.
- Edit /etc/lvm/lvm.conf and update the filter to look something like this to reject reading LVM headers locally:
vi /etc/lvm/lvm.conf

- Update the filter to restrict the local backing device and the DRBD device (adjust to your environment).
** Important: LVM headers for iSCSI-HA storage must only be read from /dev/iscsi **
filter = [ "r|/dev/xvd.|", "r|/dev/VG_Xen.*/*|", "r|/dev/cciss/c0d1|", "r|/dev/drbd.*|" ]

- Set write_cache_state=0 (also in /etc/lvm/lvm.conf).
- When done, erase the LVM cache to ensure cached data is not read by LVM:
rm -f /etc/lvm/cache/.cache && vgscan
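As a quick sanity check after clearing the cache (a sketch, assuming the shared storage is presented as /dev/iscsi as described in the documentation above):

vgscan
pvs -o pv_name,vg_name

The PV listed for the shared SR's volume group should be /dev/iscsi only; if /dev/sdb, /dev/md0 or /dev/drbd1 shows up instead, the filter is not being applied.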


Crash simulation 10 years 6 months ago #128

The reconnection was wrong; I reattached again to 10.10.10.3 (in the beginning it was OK, but then testing, and testing...)

The LVM related tuning was followed step by step.
My filter is:

filter = [ "r|/dev/xvd.|", "r|/dev/VG_Xen.*/*|", "r|/dev/md0|", "r|/dev/drbd.*|"]

and the LVM cache had already been erased (now erased again).
Same result.

Is it normal to get 2 iSCSI PBDs as the result of xe pbd-list sr-uuid=....?


Crash simulation 10 years 6 months ago #129

I am not certain whether it is normal to show 2 PBDs for an iSCSI SR. This could be the case if you have 2 LUNs.


Crash simulation 10 years 6 months ago #130

It's a fresh XenServer with ha-lizard + iscsi-ha, a 2-node pool.


Crash simulation 10 years 6 months ago #131

We tried this in our development environment. 2 PBDs for the SR seem correct, as there is one for each host in the pool. Here is sample output from a setup built using the 2-node howto. There is one PBD for each host (pool member):

[root@dev1 ha-lizard]# xe pbd-list sr-uuid=4ad16908-75cf-044b-d5db-457c62bdafd1
uuid ( RO) : 636cad5e-f409-364a-1535-5d6a154b5263
host-uuid ( RO): 1584d3e5-85b0-4a68-962a-e2ab6b289201
sr-uuid ( RO): 4ad16908-75cf-044b-d5db-457c62bdafd1
device-config (MRO): port: 3260; SCSIid: 10000000025; target: 10.10.10.3; targetIQN: iqn.2013-05.com.yourdomain:yourhost
currently-attached ( RO): true


uuid ( RO) : 4f4778eb-2fd6-e277-8056-5269f98931c0
host-uuid ( RO): 60f75e2f-9d60-4b9b-bac9-07a78929eb52
sr-uuid ( RO): 4ad16908-75cf-044b-d5db-457c62bdafd1
device-config (MRO): port: 3260; SCSIid: 10000000025; target: 10.10.10.3; targetIQN: iqn.2013-05.com.yourdomain:yourhost
currently-attached ( RO): true
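To double-check that the two PBDs really belong to different pool members, you can resolve the host-uuid values to host names (using the uuids from the sample output above; substitute your own):

xe host-list uuid=1584d3e5-85b0-4a68-962a-e2ab6b289201 params=name-label
xe host-list uuid=60f75e2f-9d60-4b9b-bac9-07a78929eb52 params=name-label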


Crash simulation 10 years 6 months ago #132

Now everything works fine again without any change!

