Forum


TOPIC:

nvme0n1 changes to nvme1n1 on master node after install 1 year 7 months ago #2839

  • Olaf Audooren (Topic Author)
  • Posts: 6
I have 2 identical PCs, each with 2 NVMe drives: one for the cluster and one for a local backup.
During installation only nvme0n1 was selected, but after installing, nvme0n1 changed to nvme1n1 on the master.
The DRBD config does not match the output of blkid or parted -l.
I wonder why this behaviour occurs, and whether the second NVMe is usable for backups.

wkr
Olaf


******** MASTER

****** excerpt from /etc/drbd.conf

resource iscsi1
{
protocol C;

on vates
{
device /dev/drbd1;
disk /dev/nvme1n1p3;
address 10.10.10.1:7789;
meta-disk internal;
}

on inria
{
device /dev/drbd1;
disk /dev/nvme1n1p3;
address 10.10.10.2:7789;
meta-disk internal;
}
}



[09:32 vates etc]# drbdadm status
iscsi1 role:Primary
disk:UpToDate
peer role:Secondary
replication:Established peer-disk:UpToDate

[09:32 vates etc]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 465.8G 0 disk
└─nvme0n1p1 259:8 0 465.8G 0 part
nvme1n1 259:1 0 465.8G 0 disk
├─nvme1n1p4 259:5 0 512M 0 part /boot/efi
├─nvme1n1p2 259:3 0 18G 0 part
├─nvme1n1p5 259:6 0 4G 0 part /var/log
├─nvme1n1p3 259:4 0 424.3G 0 part
│ └─drbd1 147:1 0 424.3G 0 disk
├─nvme1n1p1 259:2 0 18G 0 part /
└─nvme1n1p6 259:7 0 1G 0 part [SWAP]
sda 8:0 0 424.3G 0 disk
└─VG_XenStorage--9f26dfb1--d74f--f108--9448--92dbaedabdd8-MGT 253:0 0 4M 0 lvm
[10:15 vates etc]#


********* SLAVE

resource iscsi1
{
protocol C;

on vates
{
device /dev/drbd1;
disk /dev/nvme0n1p3;
address 10.10.10.1:7789;
meta-disk internal;
}

on inria
{
device /dev/drbd1;
disk /dev/nvme0n1p3;
address 10.10.10.2:7789;
meta-disk internal;
}
}


[10:11 inria mnt]# drbdadm status
iscsi1 role:Secondary
disk:UpToDate
peer role:Primary
replication:Established peer-disk:UpToDate

[10:23 inria mnt]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:1 0 465.8G 0 disk
├─nvme0n1p5 259:6 0 4G 0 part /var/log
├─nvme0n1p3 259:4 0 424.3G 0 part
│ └─drbd1 147:1 0 424.3G 1 disk
├─nvme0n1p1 259:2 0 18G 0 part /
├─nvme0n1p6 259:7 0 1G 0 part [SWAP]
├─nvme0n1p4 259:5 0 512M 0 part /boot/efi
└─nvme0n1p2 259:3 0 18G 0 part
nvme1n1 259:0 0 465.8G 0 disk
└─nvme1n1p1 259:8 0 465.8G 0 part
sda 8:0 0 424.3G 0 disk


nvme0n1 changes to nvme1n1 on master node after install 1 year 2 days ago #3007

Based on the provided information, both NVMe drives are visible and accessible on both nodes. In the DRBD (Distributed Replicated Block Device) configuration, the master references /dev/nvme1n1p3 and the slave references /dev/nvme0n1p3 as the replicated disk.

The lsblk output shows that the partition backing the DRBD device drbd1 sits on the same drive as the operating system (nvme1n1 on the master, nvme0n1 on the slave). The other NVMe drive on each node carries only a single large partition and is not used by DRBD at all; that is the drive available for backups.

As for the discrepancy between device names (nvme0n1 becoming nvme1n1), NVMe devices are named in the order they are detected during boot, and that order is not guaranteed to be stable across reboots or between otherwise identical machines. The name itself does not affect the data on the disk, but configuration files that reference kernel names such as /dev/nvme1n1p3 (as drbd.conf does here) can silently end up pointing at the wrong disk, so it is safer to reference stable identifiers.
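As a sketch, the drbd.conf excerpt could reference a stable identifier instead of a kernel name, using the symlinks udev creates under /dev/disk/by-id/. The model/serial string below is a placeholder, not taken from this system; find the real one with `ls -l /dev/disk/by-id/`:

```
on vates
{
device /dev/drbd1;
# placeholder by-id name -- substitute the symlink that points at nvme1n1p3
disk /dev/disk/by-id/nvme-ExampleModel_PLACEHOLDERSERIAL-part3;
address 10.10.10.1:7789;
meta-disk internal;
}
```

A by-id symlink always resolves to the same physical drive regardless of enumeration order, so the master and slave configs stay correct even if nvme0n1 and nvme1n1 swap at the next boot.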

Regarding the use of the second NVMe device for backups: since it is not part of the DRBD resource, you can certainly use it for that purpose. Each node's spare drive already carries a single large partition (nvme0n1p1 on the master, nvme1n1p1 on the slave), so creating a file system on it and mounting it at a dedicated backup mount point is all that is needed.
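A minimal sketch for the master (the device name follows the master's lsblk output above; verify with lsblk and blkid before running anything, since kernel names can swap between boots):

```shell
mkfs.ext4 /dev/nvme0n1p1      # spare partition on the master (nvme1n1p1 on the slave)
mkdir -p /mnt/backup
mount /dev/nvme0n1p1 /mnt/backup
# for a persistent mount, reference the UUID rather than the kernel name:
blkid /dev/nvme0n1p1          # note the UUID, then add a line to /etc/fstab:
# UUID=<uuid-from-blkid>  /mnt/backup  ext4  defaults,nofail  0  2
```

The nofail option means a missing or renamed disk will not block the boot, which matters here precisely because the NVMe enumeration has already been seen to change.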

Ensure that you have a suitable backup strategy in place to manage the backup process effectively and protect your data.

