In the previous article, we showed how to set up CentOS7 with ROOT on ZFS. Here we are going to see how you can snapshot and clone an existing installation, then configure GRUB to boot from the clone.
This can be useful in situations like upgrading the operating system, where anything can go wrong, or simply when you want to have multiple individual bootable instances of the same O/S.
Steps
====
Assuming that you have a working CentOS7 with ROOT on ZFS, the first step is to create a snapshot of the current state of the O/S and then clone it.
$ zfs snapshot rpool/ROOT/centos@snap-clone1
$ zfs clone rpool/ROOT/centos@snap-clone1 rpool/ROOT/centos-clone1
Set the mount point for the clone ...
$ zfs set mountpoint=/ rpool/ROOT/centos-clone1
Now, we need to create a customised GRUB entry in order to boot from the clone. To do so, copy/paste the following entry into /etc/grub.d/40_custom.
Adapt the kernel version and initramfs names accordingly. The only change from a standard entry is the root=ZFS= parameter on the linuxefi line, which instructs GRUB to boot from the clone instead of the original dataset.
[root@localhost ]# vi /etc/grub.d/40_custom
menuentry 'CentOS Clone1' --class centos --class gnu-linux --class gnu --class os {
load_video
set gfxpayload=keep
insmod gzio
insmod part_gpt
insmod part_gpt
insmod diskfilter
insmod mdraid09
insmod fat
set root='mduuid/cc266933031fb13be368bf24bd0fce41'
linuxefi /vmlinuz-3.10.0-957.1.3.el7.x86_64 root=ZFS=rpool/ROOT/centos-clone1 ro crashkernel=auto
initrdefi /initramfs-3.10.0-957.1.3.el7.x86_64.img
}
Then run grub2-mkconfig to re-generate the GRUB configuration file.
$ grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
This should add the customised entry into the boot list.
Since /boot is located on a different partition (linux raid), no further steps are required.
Each individual clone instance will of course use the same kernel and initramfs.
That's all. Now let's reboot and choose the "CentOS Clone1" entry in the GRUB boot menu. If successful, that should boot into the cloned version of CentOS. To verify this, simply run the following command and note the filesystem mounted on /.
$ df -h /
Filesystem Size Used Avail Use% Mounted on
rpool/ROOT/centos-clone1 9.3G 3.5G 5.9G 38% /
Now you have two individual versions of CentOS where you can do whatever changes you want, without worrying about destroying the system.
By using the same steps as described above you can create multiple clones and GRUB boot entries, to boot multiple individual versions of the same O/S.
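For example, a second clone follows exactly the same pattern; the snapshot, clone and menu entry names below are just illustrative:
$ zfs snapshot rpool/ROOT/centos@snap-clone2
$ zfs clone rpool/ROOT/centos@snap-clone2 rpool/ROOT/centos-clone2
$ zfs set mountpoint=/ rpool/ROOT/centos-clone2
Then duplicate the custom menu entry in /etc/grub.d/40_custom, give it a new title (e.g. 'CentOS Clone2'), point its linuxefi line at root=ZFS=rpool/ROOT/centos-clone2 and re-run grub2-mkconfig as shown above.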
Sunday 9 December 2018
CentOS7 | ROOT on ZFS in UEFI Mode
In this guide we are going to see how we can "migrate" an existing CentOS7 installation into ROOT on ZFS.
In the past, we have been through a similar setup, but that was in BIOS boot mode. Here we are dealing with pure UEFI mode. We'll set up /boot and /boot/efi together on the same Linux RAID1 volume, formatted as VFAT. The rest of the available space will be dedicated to ZFS.
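As a rough sketch, the target layout looks like this (the partition numbers follow the QEMU example disks used in the commands below; adjust them to your own hardware):
sda3 + sdb3 (1GB each)        -> /dev/md127 (Linux RAID1, VFAT) -> /boot (including /boot/efi)
sda1 + sdb1 (remaining space) -> rpool (ZFS mirror)             -> / and the other datasets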
What's needed
==========
- An existing CentOS7 installation. It can be on ext4, LVM or whatever.
- A bootable livecd with ZFS support. For this tutorial, I'm using a recent Ubuntu LTS livecd, which has native support for ZFS.
- 2 hard disks for Linux RAID and ZFS, in a mirror setup for redundancy.
- A minimum of 2GB of RAM (4GB or more is recommended, as ZFS is a RAM-hungry filesystem).
- Some patience.. :-)
Steps
====
Download the latest version of Ubuntu and boot the system from it. Open Terminal and switch to root user..
$ sudo -i
Then enable "Universe repository".
$ apt-add-repository universe
Optionally, if you are planning to access this system remotely via ssh, install openssh server.
$ apt update
$ passwd
There is no current password; hit Enter at that prompt.
$ apt install --yes openssh-server
Allow root user to login remotely via ssh.
$ vi /etc/ssh/sshd_config
Modify (uncomment) the following line...
PermitRootLogin yes
$ systemctl reload sshd
Install ZFS and some other tools.
$ apt install --yes mdadm gdisk dosfstools zfs-initramfs
Check available hard disks.
$ lsblk
Create the necessary partitions on both drives (always use long naming for the drives, i.e. not sda,sdb).
sda 8:0 0 10G 0 disk
sdb 8:16 0 10G 0 disk
- EFI/Boot partitions first (1GB)
$ sgdisk -n3:1M:+1024M -t3:EF00 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0
$ sgdisk -n3:1M:+1024M -t3:EF00 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1
- ZFS partitions
$ sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0
$ sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1
- Create ZFS pool (rpool).
$ zpool create -o ashift=12 \
-O atime=off -O canmount=off -O compression=lz4 -O normalization=formD \
-O xattr=sa -O mountpoint=/ -R /pool \
rpool mirror /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part1
- Check pool status.
root@ubuntu:~# zpool status
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part1 ONLINE 0 0 0
scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part1 ONLINE 0 0 0
errors: No known data errors
- Create ZFS datasets.
$ zfs create -o canmount=off -o mountpoint=none rpool/ROOT
$ zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/centos
$ zfs mount rpool/ROOT/centos
$ zfs create -o setuid=off rpool/home
$ zfs create -o mountpoint=/root rpool/home/root
$ zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
$ zfs create -o com.sun:auto-snapshot=false rpool/var/cache
$ zfs create -o acltype=posixacl -o xattr=sa rpool/var/log
$ zfs create rpool/var/spool
$ zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
- Check datasets and verify mountpoints.
root@ubuntu:~# df -h | grep rpool
rpool/ROOT/centos 8.7G 128K 8.7G 1% /pool
rpool/home 8.7G 128K 8.7G 1% /pool/home
rpool/home/root 8.7G 128K 8.7G 1% /pool/root
rpool/var/cache 8.7G 128K 8.7G 1% /pool/var/cache
rpool/var/log 8.7G 128K 8.7G 1% /pool/var/log
rpool/var/spool 8.7G 128K 8.7G 1% /pool/var/spool
rpool/var/tmp 8.7G 128K 8.7G 1% /pool/var/tmp
- Create a raid1 Linux software RAID array for the EFI/Boot partition.
root@ubuntu:~# mdadm --create --verbose --metadata=0.90 /dev/md127 --level=mirror --raid-devices=2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part3 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part3
Check md raid status ...
root@ubuntu:~# mdadm --detail /dev/md127
/dev/md127:
Version : 0.90
Creation Time : Sat Dec 8 19:41:40 2018
Raid Level : raid1
Array Size : 1048512 (1023.94 MiB 1073.68 MB)
Used Dev Size : 1048512 (1023.94 MiB 1073.68 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 127
Persistence : Superblock is persistent
Update Time : Sat Dec 8 19:42:35 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
UUID : 24ebcfb2:4956c4b1:e368bf24:bd0fce41 (local to host ubuntu)
Events : 0.18
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
- Format raid1 partition to FAT32/VFAT as per EFI specification...
$ mkdosfs -F 32 -n EFI /dev/md127
- Mount EFI partition to the /pool mountpoint ...
root@ubuntu:~# mkdir /pool/boot
root@ubuntu:~# mount /dev/md127 /pool/boot
root@ubuntu:~# df -h | grep pool
rpool/ROOT/centos 7.1G 1.1G 6.1G 15% /pool
rpool/home 7.7G 1.7G 6.1G 22% /pool/home
rpool/home/root 6.1G 256K 6.1G 1% /pool/root
rpool/var/cache 6.1G 128K 6.1G 1% /pool/var/cache
rpool/var/log 6.1G 128K 6.1G 1% /pool/var/log
rpool/var/spool 6.1G 128K 6.1G 1% /pool/var/spool
rpool/var/tmp 6.1G 128K 6.1G 1% /pool/var/tmp
/dev/md127 1022M 4.0K 1022M 1% /pool/boot
- Now that we have the datasets mounted in /pool, we can rsync the existing CentOS7 ROOT system into it.
To do this, we must ssh to the source machine and rsync the whole ROOT system to the target system (Ubuntu). We presume that the source system is already configured in UEFI mode.
- Verify target (Ubuntu live environment) system's ip address..
root@ubuntu:~# ip addr | grep inet
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
inet 10.1.2.201/24 brd 10.1.2.255 scope global dynamic noprefixroute ens18
inet6 fe80::ee60:8526:421d:6245/64 scope link noprefixroute
- Verify that the ssh server is running ...
root@ubuntu:~# systemctl status sshd
● ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2018-12-08 19:22:04 UTC; 7s ago
Main PID: 6899 (sshd)
Tasks: 1 (limit: 2314)
CGroup: /system.slice/ssh.service
└─6899 /usr/sbin/sshd -D
- On the source system, rsync the whole system to the target machine (better to boot from a livecd and do this as the root user) ...
$ rsync -avPX --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/run / root@10.1.2.201:/pool/
- On the target system, monitor rsync copy process ...
root@ubuntu:~# ls -l /pool
total 23
lrwxrwxrwx. 1 root root 7 Jun 10 10:16 bin -> usr/bin
drwxr-xr-x. 150 root root 302 Dec 8 19:19 etc
drwxr-xr-x. 3 root root 3 Apr 11 2018 home
lrwxrwxrwx. 1 root root 8 Oct 10 18:32 lib -> /usr/lib
lrwxrwxrwx. 1 root root 9 Jun 10 10:16 lib64 -> usr/lib64
drwx------ 2 root root 2 Dec 8 19:32 logs
drwx------ 2 root root 2 Dec 8 19:32 lost+found
drwx------ 2 root root 2 Dec 8 19:32 media
drwx------ 2 root root 2 Dec 8 19:32 mnt
drwx------ 2 root root 2 Dec 8 19:32 opt
drwxr-xr-x 2 root root 2 Dec 8 19:11 root
lrwxrwxrwx. 1 root root 8 Jun 10 10:16 sbin -> usr/sbin
drwx------ 2 root root 2 Dec 8 19:32 srv
drwx------ 2 root root 2 Dec 8 19:32 usr
drwxr-xr-x 6 root root 6 Dec 8 19:12 var
Rsync should take a while to complete.
Again, verify that /pool/boot content looks good ...
root@ubuntu:~# ls /pool/boot
NvVars
System.map-3.10.0-957.1.3.el7.x86_64
config-3.10.0-957.1.3.el7.x86_64
efi
grub
grub2
initramfs-0-rescue-1752044181d544db9f5a8665d63303c5.img
initramfs-0-rescue-396ef4d95e49c74ca2f65811f737be21.img
initramfs-3.10.0-957.1.3.el7.x86_64.img
initramfs-3.10.0-957.1.3.el7.x86_64kdump.img
initrd-plymouth.img
symvers-3.10.0-957.1.3.el7.x86_64.gz
vmlinuz-0-rescue-1752044181d544db9f5a8665d63303c5
vmlinuz-0-rescue-396ef4d95e49c74ca2f65811f737be21
vmlinuz-3.10.0-957.1.3.el7.x86_64
- Find the UUID for /dev/md127 and add the required entry in /pool/etc/fstab...
root@ubuntu:~# blkid /dev/md127
/dev/md127: LABEL="EFI" UUID="FD0B-B120" TYPE="vfat"
root@ubuntu:~# echo UUID=FD0B-B120 /boot vfat noatime,nofail,x-systemd.device-timeout=1 0 0 >> /pool/etc/fstab
*Note* Remove any existing entries in /pool/etc/fstab first.
- Set mountpoint type to legacy for the following datasets...
$ zfs set mountpoint=legacy rpool/var/log
$ zfs set mountpoint=legacy rpool/var/tmp
- Create a zvol for swap use
$ zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
$ mkswap /dev/zvol/rpool/swap
$ swapon /dev/zvol/rpool/swap
- Add the following additional entries in /pool/etc/fstab
rpool/var/log /var/log zfs noatime,nodev,noexec,nosuid 0 0
rpool/var/tmp /var/tmp zfs noatime,nodev,nosuid 0 0
/dev/zvol/rpool/swap none swap defaults 0 0
- Unmount /pool
$ umount -R /pool
- Set pool cache file
$ zpool set cachefile=/etc/zfs/zpool.cache rpool
- Remount datasets and EFI boot partition.
$ zpool import -N -R /pool rpool -f
$ zfs mount rpool/ROOT/centos
$ zfs mount rpool/home
$ zfs mount rpool/home/root
$ zfs mount rpool/var/cache
$ zfs mount rpool/var/spool
$ mount -t zfs rpool/var/log /pool/var/log
$ mount -t zfs rpool/var/tmp /pool/var/tmp
$ mount /dev/md127 /pool/boot
*Note* If you receive an error like the following "filesystem 'rpool/var/tmp' cannot be mounted at '/pool/var/tmp' due to canonicalization error 2." then you need to create that folder first before mounting it.
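For example, if rpool/var/tmp is the dataset that fails, something like this (a minimal sketch) should clear the error:
$ mkdir -p /pool/var/log /pool/var/tmp
$ mount -t zfs rpool/var/log /pool/var/log
$ mount -t zfs rpool/var/tmp /pool/var/tmp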
- Create the following directories in /pool and bind-mount the equivalents from the Ubuntu live environment
$ for i in dev sys proc run tmp;do mkdir /pool/$i;done
$ for i in dev sys proc run;do mount -o bind /$i /pool/$i;done
- Copy /etc/zfs/zpool.cache in CentOS
$ cp /etc/zfs/zpool.cache /pool/etc/zfs
__---=From now on we must chroot in CentOS and work from there=---___
$ chroot /pool /bin/bash
- Install the ZFS packages in CentOS (if you haven't done so already) by following the instructions here.
Here, I'm using kABI type ZFS packages (as opposed to DKMS).
$ yum install zfs zfs-dracut
- Edit /etc/dracut.conf and add/modify the following line
add_dracutmodules+="zfs"
- Generate mdadm.conf file
$ mdadm --detail --scan >> /etc/mdadm.conf
- Generate initramfs, amend kernel version accordingly.
$ dracut -v -f /boot/initramfs-3.10.0-957.1.3.el7.x86_64.img --kver 3.10.0-957.1.3.el7.x86_64
- Verify that zfs and mdadm related files are included in the initramfs
$ lsinitrd /boot/initramfs-3.10.0-957.1.3.el7.x86_64.img | grep zfs
$ lsinitrd /boot/initramfs-3.10.0-957.1.3.el7.x86_64.img | grep mdadm
- Install Grub2-EFI packages if you haven't done so. Also, note that you need to remove any Grub2-pc packages which are used for Legacy or BIOS boot mode.
$ yum install grub2-efi grub2-efi-x64-modules shim efibootmgr
$ yum remove grub2-pc grub2-pc-modules
- Edit /etc/default/grub and modify the following line
GRUB_CMDLINE_LINUX="crashkernel=auto root=ZFS=rpool/ROOT/centos"
- Install Grub files in /boot/efi
$ grub2-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=centos --recheck --no-floppy
- Generate grub.cfg file
$ grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
Note, if you receive the following error "/usr/sbin/grub2-probe: error: failed to get canonical path of `/dev/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part1'", then export the following environment variable and repeat the previous command.
$ export ZPOOL_VDEV_NAME_PATH=YES
__---=Exit chroot environment=---__
$ exit
- Unmount /pool
$ umount -R /pool
- Export pool
$ zpool export rpool
- Finally reboot and check if you can boot from the hard disk(s)
$ reboot
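Once the system is up, a quick sanity check (the exact output will differ on your system) is to confirm that / is on the ZFS dataset and that the pool and the md array are healthy:
$ df -h /
$ zpool status rpool
$ cat /proc/mdstat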
Saturday 18 August 2018
How to cast movies from Kodi to Chromecast on a PC
There are lots of guides out there showing how to cast media from an android device via Kodi to Chromecast, but just a few for how to cast from a PC.
After doing some research, I found that I can cast from Kodi to Chromecast by using VLC media player as an external player. Note that you will need VLC v3.0+ for this to work, since Chromecast support was added in this version.
The steps below show how I did it on my Ubuntu laptop, but the same procedure should work on other Linux distros as well. Also, the same logic should apply on Windows machines.
Steps
- Install latest version of Kodi
- Install latest version of VLC media player (v3.0+)
- Copy the following content and create this file in your home folder: .kodi/userdata/playercorefactory.xml (see the sketch after this list). This basically tells Kodi to use VLC to play back the media content instead of its built-in player. Please modify any settings according to your specific setup.
- Open Kodi and start a movie.
- Wait for VLC to open, it should start the playback shortly.
- In order to cast to Chromecast, in VLC go to Playback, Renderer and select your Chromecast device.
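A minimal playercorefactory.xml for this kind of setup looks roughly like the following sketch; the VLC path (/usr/bin/vlc) and the arguments are assumptions, adjust them to your own installation:
<playercorefactory>
  <players>
    <!-- define VLC as an external player -->
    <player name="VLC" type="ExternalPlayer" audio="false" video="true">
      <filename>/usr/bin/vlc</filename>
      <!-- {1} is replaced by the file being played -->
      <args>--fullscreen "{1}"</args>
      <hidexbmc>true</hidexbmc>
    </player>
  </players>
  <rules action="prepend">
    <!-- route all video playback to VLC -->
    <rule video="true" player="VLC"/>
  </rules>
</playercorefactory>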
Before this method, I was using Google Chrome to do the playback, by using its "cast tab" option, but the playback was very choppy compared to VLC.
Saturday 11 August 2018
Xorg unable to detect screens in CentOS7
Summary
Some background
The BIOS is configured to use the NVidia card as the primary graphics card; however, Xorg is unable to detect the screens. The following lines are logged at the end of the /var/log/Xorg.0.log file ...
[ 22.885] (EE) No devices detected.
[ 22.902]
Fatal server error:
[ 22.902] no screens found
[ 22.902]
Please consult the The X.Org Foundation support
at http://wiki.x.org
for help.
[ 22.902] Please also check the log file at "/var/log/Xorg.0.log" for additional information.
[ 22.902]
The reason
This behaviour occurs due to the fact that the machine has 2 graphics cards. Somehow the onboard Intel card takes priority over the NVidia card, even though the latter is set as the primary card in the BIOS. As a result, Xorg is unable to detect the monitors, which are connected to the NVidia card.
The solution
The solution for me was to create the following file /etc/X11/xorg.conf.d/10-nvidia.conf and add the following lines..
Section "Device"
Identifier "Device0"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BusID "PCI:1:0:0"
EndSection
By doing this you are basically telling Xorg to use the NVidia card as the primary card. One thing to note is the BusID line. You should find the correct bus ID entry for your card. You can do so by using the following command...
# lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)
01:00.0 VGA compatible controller: NVIDIA Corporation GM107GL [Quadro K620] (rev a2)
Also, note that the format is slightly different in 10-nvidia.conf, (1:0:0) instead of (01:00.0).
Monday 30 July 2018
Issues booting to CentOS7 after resizing its root/swap LV
I recently faced the following strange issue...
Some background first
I installed CentOS7 on a VM, its assigned vhd size was 10G, nothing fancy so far ...
Installation was successful and I was able to boot into the O/S without issues.
Later, at some point, I decided to increase vhd size from 10G to 13GB. To accomplish that I followed the steps below...
- Turned off the VM
- Resized the vhd to the new size
- Turned on the VM
- Booted from LiveCD [CentOS7 dvd]
- lsblk
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 13G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 12.5G 0 part
├─centos-root 253:0 0 11.5G 0 lvm /
└─centos-swap 253:1 0 1G 0 lvm [SWAP]
- fdisk /dev/sda
Here, I deleted the second partition (/dev/sda2), which is the LVM partition where the centos/root and centos/swap LVs reside. Then I recreated it, making sure that the start sector was the same as before deleting the partition. Obviously, the end sector will be different, since we increased the vhd size. Finally, I set the partition type to 8e (Linux LVM), saved the settings and exited.
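A rough sketch of that fdisk session (keystrokes annotated; the exact prompts vary slightly between fdisk versions):
[root@localhost ~]# fdisk /dev/sda
d        # delete a partition -> choose partition 2
n        # create a new partition -> primary, number 2
         #   first sector: must be the SAME start sector the old /dev/sda2 had
         #   last sector : accept the default (end of the enlarged disk)
t        # change the partition type -> partition 2 -> hex code 8e (Linux LVM)
w        # write the new partition table and exit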
- pvresize /dev/sda2
- lvresize -l +100%FREE centos/root
[root@localhost ~]# lvdisplay centos/root
--- Logical volume ---
LV Path /dev/centos/root
LV Name root
VG Name centos
LV UUID 1rlD1f-BSw4-R3rl-z22I-GtDN-rykj-cxdlXL
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-07-27 23:06:39 +0100
LV Status available
# open 1
LV Size <11.52 GiB
Current LE 2948
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
So far, so good...centos/root LV , which is the backing device for root file system (/), has its size increased.
- mkdir /mnt/root #create a temp directory to mount centos/root LV
- mount /dev/mapper/centos-root /mnt/root #mount centos/root to /mnt/root
- xfs_growfs /mnt/root #grow the root filesystem to match the centos/root LV size. Did I mention that the root fs is XFS?
- umount /mnt/root #unmount root fs
- reboot #reboot the system, and boot from hdd this time [not LiveCD]
The issue
After rebooting from LiveCD to the normal hard drive (now 13G), the following happened ...
- GRUB menu appeared and loaded default kernel (3.10.0-862.9.1.el7.x86_64).
- kernel loads and initial boot sequence starts.
- kernel hands over to initramfs (initrd).
- initrd fails to detect/active centos/root and centos/swap LVs, so after a while I'm being dropped to dracut shell...
In dracut shell, I execute the following commands:
- lvm lvchange -ay centos/root #success
- lvm lvchange -ay centos/swap #success
- ln -s /dev/mapper/centos-root /dev/root #required for boot process to continue to the real root fs (/).
- exit #exit dracut shell
At this point the boot process continues and finally I can log in to the normal CentOS7 installation. But the question remains: why do I have to do all this stuff? Wasn't this supposed to happen automatically? The answer is, yes, it should... but something went wrong (obviously)... and now it needs my manual intervention to succeed.
The solution
After 5 days of continuous searching, a solution was finally found. The problem was not in the initramfs, but in the fact that the /dev/sda2 partition (PV), after I resized it, somehow ended up with 2 partition table signatures.
One was set as a "dos" partition with an offset of "0x1fe" and the second was set as "LVM_member" with an offset of "0x8e".
This confused blkid in the initramfs during the initial boot stage into thinking that /dev/sda2 is *not* an LVM_member but rather a simple "dos" partition, hence it refused to activate the centos/root and centos/swap LVs which were required to boot the machine into the O/S.
What's interesting is that neither fdisk nor parted were showing this second signature, however there was a trace of it in the dracut report file (/run/initramfs/rdsosreport.txt), specifically the one below...
To fix this problem, I booted the system from a livecd and used "wipefs" utility to erase the problematic signature "dos" and leave only the correct one "LVM_member".
+ blkid
/dev/sr0: UUID="2016-10-28-12-18-36-00" LABEL="CentOS 7 x86_64" TYPE="iso9660" PTTYPE="dos"
/dev/sda1: UUID="a620a180-3a8c-4b5f-ad30-804f131a7261" TYPE="xfs"
/dev/sda2: PTTYPE="dos"
+ blkid -o udev
ID_FS_UUID=2016-10-28-12-18-36-00
ID_FS_UUID_ENC=2016-10-28-12-18-36-00
ID_FS_LABEL=CentOS_7_x86_64
ID_FS_LABEL_ENC=CentOS\x207\x20x86_64
ID_FS_TYPE=iso9660
ID_PART_TABLE_TYPE=dos
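For reference, running wipefs with no options is a safe way to list the signatures (and their offsets) it sees on the partition, without erasing anything:
[root@localhost ~]# wipefs /dev/sda2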
The exact command was: 'wipefs -o 0x1fe -ff /dev/sda2'
Then a reboot and, voila, the system booted without issues ...
Sunday 29 July 2018
Configuring LINSTOR [DRBD9] storage plugin for Proxmox VE 5.x
DRBD9 is an opensource, block-level, replicated distributed storage system [link]. It can replicate the available local storage space of an existing computer node to one or more other nodes. It works similarly to the way RAID1 works on RAID controllers, but DRBD uses the network to replicate the data. DRBD9 operates at the block level, so it should perform faster on some workloads, such as virtualisation, compared to other filesystem-based distributed storage systems.
- The main difference between DRBD8 and DRBD9 is that the former is limited to only two storage nodes. However, DRBD8 is still heavily used in the industry, perhaps more than DRBD9, at least for now ...
In this article we'll see how we can configure DRBD9 management system, called LINSTOR, to work with Proxmox VE, a well known opensource hypervisor. We assume that you already have a working Proxmox cluster (at least 3 nodes).
The next step will be to configure Linbit's (the company behind DRBD) repository on Proxmox, to download the necessary packages, required for DRBD9/Linstor to work. Below follows a breakdown of these components:
- DRBD9 kernel module
- DRBD low-level admin tools
- LINSTOR management tools
- LINSTOR storage plugin for Proxmox
On all Proxmox nodes, install Linbit repository, make sure you modify PVERS variable value to match your PVE version (here is 5).
# wget -O- https://packages.linbit.com/package-signing-pubkey.asc | apt-key add -
# PVERS=5 && echo "deb http://packages.linbit.com/proxmox/ proxmox-$PVERS drbd-9.0" \
  > /etc/apt/sources.list.d/linbit.list
# apt-get update && apt-get install linstor-proxmox
This will install the DRBD9 Proxmox storage plugin only. For an up to date guide about this plugin, please follow this link. To install the rest of the DRBD9 components, follow the steps below...
#apt install pve-headers
#apt install drbd-dkms drbd-utils
#rmmod drbd; modprobe drbd
#grep -q drbd /etc/modules || echo "drbd" >> /etc/modules
- Important! Kernel headers must be installed on each node, prior to installing drbd-dkms package, otherwise it will fail to build.
The next step is to install the LINSTOR satellite and controller components. A satellite is a node which provides access to the low-level storage system (LVM, ZFS) for DRBD9. A controller is a node which orchestrates the satellite nodes and manages other things like resource assignments, volume management etc. A node can be a satellite and a controller at the same time.
- Important! There must be only one controller node active on a DRBD9 cluster.
- You must install packages below to all Proxmox nodes, with the exception of controller, which must be installed only on one of the Proxmox nodes.
#apt install linstor-controller linstor-satellite linstor-client
- Enable and start the satellite service on each node.
#systemctl start linstor-satellite
#systemctl enable linstor-satellite
- Now, you must decide which node will be the "controller" node. Once you decide install linstor-controller package on it and start the service.
[Update] It's now supported to deploy Linstor Controller as a separate HA VM within PVE cluster. For more on this, please check latest Linstor documentation.
#systemctl start linstor-controller
- Verify that the services are up and running on all nodes.
#systemctl status linstor-satellite
#systemctl status linstor-controller #only on controller node.
If the above were successful, we can now proceed further to configure the backing storage system for LINSTOR. In my example, I'm using LVM thin as a storage provider. You can also use ZFS if you wish to. LVM can be used to manage a hardware RAID array, MD Raid or a single disk (only for testing). So, at the high level, the storage configuration will be the following ...
(RAID ARRAY) <--> (LVM) <--> (DRBD9) <--> (VM)
- Suppose we have a storage array assigned as /dev/sdb; let's configure LVM Thin on it...
#vgcreate vg_hdd /dev/sdb
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 pve lvm2 a-- 19.75g 2.38g
/dev/sdb vg_hdd lvm2 a-- 30.00g 0
# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 3 0 wz--n- 19.75g 2.38g
vg_hdd 1 4 0 wz--n- 30.00g 0
We've created a volume group named vg_hdd which is using /dev/sdb [RAID array], for storage space provider.
- Create a LVM ThinPool on this Volume Group. This ThinPool will be called "drbdthinpool" and it will be later used by LINSTOR to allocate LVs for the DRBD resources, which in their turn will be used as VM virtual hard disks.
#lvcreate -L 29G -T vg_hdd/drbdthinpool
# lvs
LV VG Attr LSize Po
data pve twi-a-tz-- 8.25g
root pve -wi-ao---- 4.75g
swap pve -wi-ao---- 2.38g
drbdthinpool vg_hdd twi-aotz-- 29.96g
- Important! You must create drbdthinpool on all satellite nodes.
- Important! All LINSTOR commands below must be executed on the LINSTOR controller node. First check the nodes status.
#linstor n l #show LINSTOR nodes and their status.
╭───────────────────────────────────────────╮
┊ Node ┊ NodeType ┊ IPs ┊ State ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ pve1 ┊ COMBINED ┊ 192.168.198.20 ┊ Online ┊
┊ pve2 ┊ COMBINED ┊ 192.168.198.21 ┊ Online ┊
┊ pve3 ┊ COMBINED ┊ 192.168.198.22 ┊ Online ┊
╰───────────────────────────────────────────╯
- Then, create a Storage Pool Definition (SPD) on LINSTOR.
#linstor spd c drbdpool #drbdpool will be the name of the SPD.
- Next, create the Storage Pools (SP) for each PVE node on LINSTOR.
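Based on the parameter breakdown below, the storage pool creation command looks roughly like this (a sketch; check linstor sp c --help for the exact syntax of your client version):
#linstor sp c pve1 drbdpool lvmthin vg_hdd drbdthinpool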
A breakdown of the above parameters:
-----------------------------------------------
sp (storage pool)
c (create)
pve1 (first proxmox node)
drbdpool (storage pool definition, we created this above)
lvmthin (use lvm thin driver)
vg_hdd (LVM Volume Group we created previously)
drbdthinpool (LVM thin pool we created previously)
------------------------------------------------
- Repeat the above command to add SP for each individual PVE node on LINSTOR, i.e replace pve1 parameter with pve2 and pve3 respectively. When done it should look like below ...
root@pve1:~# linstor sp l
╭──────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool ┊ Node ┊ Driver ┊ PoolName ┊ Free ┊ SupportsSnapshots ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ drbdpool ┊ pve1 ┊ LvmThinDriver ┊ vg_hdd/drbdthinpool ┊ (thin) ┊ true ┊
┊ drbdpool ┊ pve2 ┊ LvmThinDriver ┊ vg_hdd/drbdthinpool ┊ (thin) ┊ true ┊
┊ drbdpool ┊ pve3 ┊ LvmThinDriver ┊ vg_hdd/drbdthinpool ┊ (thin) ┊ true ┊
╰──────────────────────────────────────────────────────────────────────────────────────╯
- At this point we should have finished configuring LINSTOR, so we can proceed configuring its Proxmox storage plugin. To do so, we must add the following lines in Proxmox storage configuration file /etc/pve/storage.cfg ...
drbd: drbdstorage
content images,rootdir
controller 192.168.198.20
redundancy 3
A breakdown of the above parameters:
----------------------------------------------
drbd (the storage type identifier hardcoded by Proxmox for DRBD storage; this cannot be changed)
drbdstorage (an arbitrary name selected to define this storage backend on PVE; this can be changed)
content images,rootdir (this storage type supports Qemu VMs and LXC type containers)
controller (this defines which node acts as the LINSTOR controller. It's mandatory to type that node's IP address correctly, otherwise Proxmox will fail to create VMs/LXCs on DRBD)
redundancy 3 (how many replicas will be created for each VM/LXC; the minimum should be 2)
-----------------------------------------------
- By now we should be ready to go, so let's open PVE webgui management and create a Qemu VM or LXC container on DRBD9 storage...
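If you prefer the command line, the LINSTOR client on the controller node can also list the DRBD resources that the plugin creates for each VM/LXC, for example:
#linstor r l #list DRBD resources and their state on each node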
- You should be able to see also the associated LV Thin Volumes for each individual DRBD resource/VM...
root@pve1:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-a-tz-- 8.25g 0.00 0.01
root pve -wi-ao---- 4.75g
swap pve -wi-ao---- 2.38g
drbdthinpool vg_hdd twi-aotz-- 29.96g 22.32 17.44
vm-100-disk-1_00000 vg_hdd Vwi-aotz-- 13.01g drbdthinpool 23.12
vm-101-disk-1_00000 vg_hdd Vwi-aotz-- 9.01g drbdthinpool 10.60
vm-102-disk-1_00000 vg_hdd Vwi-aotz-- 10.00g drbdthinpool 27.22
- If you have any problems, please read carefully DRBD9/LINSTOR documentation at Linbit's web site, to familiarise yourself with LINSTOR command line or subscribe to DRBD mailing list.
Troubleshooting mirrored dynamic disks on Windows Server 2008 R2
Problem Description:
You have a Windows Server 2003 or 2008 (R2) server configured with mirrored dynamic disks. One of the disks fails and you need to replace the failed disk with a new one. You buy the new disk, connect it to the slot where the failed disk was and power on your server. You go to Disk Management, convert the new disk to a dynamic disk and remove the old (missing) mirror. Now you are ready to create the mirror again, but suddenly you get the following error message: "All disks holding extents for a given volume must have the same sector size"
As described here, what this message says is that the old disk in the mirrored array (500GB Dell in my case) and the new disk (1TB WD Black in my case) have different sector sizes (512 bytes for the 500GB disk and 4096 bytes for the 1TB disk). The new 1TB disk was an Advanced Format disk, and as a result I could not create a mirrored volume between them. I needed to buy another 1TB 4K-formatted hdd, clone or create a backup image & restore the old (500GB) disk to the new disk, and finally create the mirrored array between the 1TB disks.
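If you want to check the sector sizes yourself before buying anything, Windows can report them per volume; for example (run from an elevated command prompt, the drive letter here is just an example):
C:\> fsutil fsinfo ntfsinfo D:
Look at the "Bytes Per Sector" value (and, where reported, "Bytes Per Physical Sector") in the output.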
OK, so I bought another 1TB WD Black and I was ready to clone the data from the old disk to the new one. The first problem that I faced is that most imaging programs cannot handle dynamic disks at all. For example, Acronis needs a special add-on called "Plus Pack" to be able to create an image of a dynamic disk (you must break the mirror first, because mirrored volumes are not supported) and then restore it to the new one.
Solution:
Personally I followed the steps below:
- Downloaded Hiren’s boot cd and used Norton Ghost to clone 500gb hdd -> 1tb hdd.
- After cloning I had a BASIC 1TB disk with all the data inside. I removed the 500GB disk, left only this 1TB disk in the slot and booted the machine.
- Got "0xc000000e Info: The boot selection failed because a required device is inaccessible".
- I put the Windows Server 2008 R2 DVD in the DVD-RW drive and booted from there. Fortunately the boot CD found the bootloader problems and automatically fixed them. If that didn't work for you, try this article.
- Rebooted again and voila, finally the system booted correctly!
- Powered off the server, put the 2nd 1tb hdd, powered on, converted both disks to dynamic disks and finally enabled mirroring between them.