In the previous article, we showed how to set up CentOS7 with ROOT on ZFS. Here we are going to see how you can snapshot and clone an existing installation, and then configure GRUB to boot from the clone.
This can be useful in situations such as an operating-system upgrade, where anything can go wrong, or simply when you want multiple, individually bootable instances of the same O/S.
Steps
====
Assuming that you have a working CentOS7 installation with ROOT on ZFS, the first step is to create a snapshot of the current state of the O/S and then clone it.
$ zfs snapshot rpool/ROOT/centos@snap-clone1
$ zfs clone rpool/ROOT/centos@snap-clone1 rpool/ROOT/centos-clone1
Set the mount point for the clone ...
$ zfs set mountpoint=/ rpool/ROOT/centos-clone1
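Before creating the GRUB entry, a quick, optional sanity check can confirm that the clone exists, which snapshot it originates from, and that its mountpoint is set to /:
$ zfs list -o name,origin,mountpoint -r rpool/ROOT
rpool/ROOT/centos-clone1 should be listed with origin rpool/ROOT/centos@snap-clone1 and mountpoint /.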
Now we need to create a customised GRUB entry in order to boot from the clone. To do so, copy/paste the following entry into /etc/grub.d/40_custom and adapt the kernel version and initramfs accordingly. The only change compared to a stock entry is the root=ZFS=rpool/ROOT/centos-clone1 parameter, which instructs GRUB to boot from the clone instead of the original dataset.
[root@localhost ]# vi /etc/grub.d/40_custom
menuentry 'CentOS Clone1' --class centos --class gnu-linux --class gnu --class os {
load_video
set gfxpayload=keep
insmod gzio
insmod part_gpt
insmod part_gpt
insmod diskfilter
insmod mdraid09
insmod fat
set root='mduuid/cc266933031fb13be368bf24bd0fce41'
linuxefi /vmlinuz-3.10.0-957.1.3.el7.x86_64 root=ZFS=rpool/ROOT/centos-clone1 ro crashkernel=auto
initrdefi /initramfs-3.10.0-957.1.3.el7.x86_64.img
}
Then run grub2-mkconfig to re-generate the GRUB configuration file.
$ grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
This should add the customised entry into the boot list.
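If you want to double-check that the entry really made it into the generated configuration, a simple grep will do (optional):
$ grep "CentOS Clone1" /boot/efi/EFI/centos/grub.cfg
It should print the menuentry line we added in /etc/grub.d/40_custom.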
Since /boot is located on a separate partition (Linux RAID), no further steps are required.
Each individual clone instance will of course use the same kernel and initramfs.
That's all. Now let's reboot and choose the "CentOS Clone1" entry in the GRUB boot menu. If successful, the system should boot into the cloned version of CentOS. To verify this, simply run the following command and note the filesystem that is mounted on /.
$ df -h /
Filesystem Size Used Avail Use% Mounted on
rpool/ROOT/centos-clone1 9.3G 3.5G 5.9G 38% /
Now you have two individual versions of CentOS where you can make whatever changes you want, without worrying about destroying the system.
By following the same steps as described above, you can create multiple clones and GRUB boot entries, and boot multiple individual versions of the same O/S.
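As a sketch, a hypothetical second clone (the names are illustrative) would look like this, followed by its own menu entry in /etc/grub.d/40_custom pointing at root=ZFS=rpool/ROOT/centos-clone2 and another run of grub2-mkconfig:
$ zfs snapshot rpool/ROOT/centos@snap-clone2
$ zfs clone rpool/ROOT/centos@snap-clone2 rpool/ROOT/centos-clone2
$ zfs set mountpoint=/ rpool/ROOT/centos-clone2
$ grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg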
Sunday, 9 December 2018
CentOS7 | ROOT on ZFS in UEFI Mode
In this guide we are going to see how we can "migrate" an existing CentOS7 installation into ROOT on ZFS.
In the past, we have been through a similar setup, but that was in BIOS boot mode. Here we are dealing with pure UEFI mode. We'll set up /boot and /boot/efi together on the same Linux RAID1 volume, formatted as VFAT. The rest of the available space will be dedicated to ZFS.
What's needed
==========
- An existing CentOS7 installation. It can be on ext4, LVM, or anything else.
- A bootable livecd with ZFS support. For this tutorial, I'm using a recent Ubuntu LTS livecd, which has native support for ZFS.
- Two hard disks for Linux RAID and ZFS, in a mirror setup for redundancy.
- A minimum of 2GB of RAM (4GB or more is recommended, as ZFS is a RAM-hungry filesystem).
- Some patience.. :-)
Steps
====
Download the latest version of Ubuntu and boot the system from it. Open a terminal and switch to the root user.
$ sudo -i
Then enable "Universe repository".
$ apt-add-repository universeOptionally, if you are planning to access this system remotely via ssh, install openssh server
$ apt update
$ passwd
There is no current password; hit Enter at that prompt.
$ apt install --yes openssh-server
Allow the root user to log in remotely via ssh.
$ vi /etc/ssh/sshd_config
Modify (uncomment) the following line ...
PermitRootLogin yes
$ systemctl reload sshd
Install ZFS and some other tools.
$ apt install --yes mdadm gdisk dosfstools zfs-initramfs
Check the available hard disks.
$ lsblk
sda 8:0 0 10G 0 disk
sdb 8:16 0 10G 0 disk
Create the necessary partitions on both drives (always use the long /dev/disk/by-id names for the drives, i.e. not sda, sdb).
- EFI/Boot partitions first (1GB)
$ sgdisk -n3:1M:+1024M -t3:EF00 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0
$ sgdisk -n3:1M:+1024M -t3:EF00 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1
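Optionally, you can print the partition table at this point to make sure the EFI partition was created as expected (the device name matches the example disks used in this guide):
$ sgdisk -p /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0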
- ZFS partitions
$ sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0
$ sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1
- Create ZFS pool (rpool).
$ zpool create -o ashift=12 \
-O atime=off -O canmount=off -O compression=lz4 -O normalization=formD \
-O xattr=sa -O mountpoint=/ -R /pool \
rpool mirror /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part1
- Check pool status.
root@ubuntu:~# zpool status
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part1 ONLINE 0 0 0
scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part1 ONLINE 0 0 0
errors: No known data errors
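Optionally, you can also confirm that the pool-wide dataset options passed with -O were applied:
$ zfs get compression,atime,xattr,mountpoint rpool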
- Create ZFS datasets.
$ zfs create -o canmount=off -o mountpoint=none rpool/ROOT
$ zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/centos
$ zfs mount rpool/ROOT/centos
$ zfs create -o setuid=off rpool/home
$ zfs create -o mountpoint=/root rpool/home/root
$ zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
$ zfs create -o com.sun:auto-snapshot=false rpool/var/cache
$ zfs create -o acltype=posixacl -o xattr=sa rpool/var/log
$ zfs create rpool/var/spool
$ zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
- Check datasets and verify mountpoints.
root@ubuntu:~# df -h | grep rpool
rpool/ROOT/centos 8.7G 128K 8.7G 1% /pool
rpool/home 8.7G 128K 8.7G 1% /pool/home
rpool/home/root 8.7G 128K 8.7G 1% /pool/root
rpool/var/cache 8.7G 128K 8.7G 1% /pool/var/cache
rpool/var/log 8.7G 128K 8.7G 1% /pool/var/log
rpool/var/spool 8.7G 128K 8.7G 1% /pool/var/spool
rpool/var/tmp 8.7G 128K 8.7G 1% /pool/var/tmp
- Create a raid1 Linux software RAID array for the EFI/Boot partition.
root@ubuntu:~# mdadm --create --verbose --metadata=0.90 /dev/md127 --level=mirror --raid-devices=2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part3 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part3
Check md raid status ...
root@ubuntu:~# mdadm --detail /dev/md127
/dev/md127:
Version : 0.90
Creation Time : Sat Dec 8 19:41:40 2018
Raid Level : raid1
Array Size : 1048512 (1023.94 MiB 1073.68 MB)
Used Dev Size : 1048512 (1023.94 MiB 1073.68 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 127
Persistence : Superblock is persistent
Update Time : Sat Dec 8 19:42:35 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
UUID : 24ebcfb2:4956c4b1:e368bf24:bd0fce41 (local to host ubuntu)
Events : 0.18
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
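You can also keep an eye on the array and its initial resync through /proc/mdstat (optional):
$ cat /proc/mdstat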
- Format raid1 partition to FAT32/VFAT as per EFI specification...
$ mkdosfs -F 32 -n EFI /dev/md127
- Mount EFI partition to the /pool mountpoint ...
root@ubuntu:~# mkdir /pool/boot
root@ubuntu:~# mount /dev/md127 /pool/boot
root@ubuntu:~# df -h | grep pool
rpool/ROOT/centos 7.1G 1.1G 6.1G 15% /pool
rpool/home 7.7G 1.7G 6.1G 22% /pool/home
rpool/home/root 6.1G 256K 6.1G 1% /pool/root
rpool/var/cache 6.1G 128K 6.1G 1% /pool/var/cache
rpool/var/log 6.1G 128K 6.1G 1% /pool/var/log
rpool/var/spool 6.1G 128K 6.1G 1% /pool/var/spool
rpool/var/tmp 6.1G 128K 6.1G 1% /pool/var/tmp
/dev/md127 1022M 4.0K 1022M 1% /pool/boot
- Now that we have the datasets mounted under /pool, we can rsync the existing CentOS7 root system into it.
To do this we must ssh to the source machine and rsync the whole root filesystem to the target system (the Ubuntu live environment). We presume that the source system is already configured in UEFI mode.
- Verify the target (Ubuntu live environment) system's IP address ...
root@ubuntu:~# ip addr | grep inet
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
inet 10.1.2.201/24 brd 10.1.2.255 scope global dynamic noprefixroute ens18
inet6 fe80::ee60:8526:421d:6245/64 scope link noprefixroute
- Verify that the ssh server is running ...
root@ubuntu:~# systemctl status sshd
● ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2018-12-08 19:22:04 UTC; 7s ago
Main PID: 6899 (sshd)
Tasks: 1 (limit: 2314)
CGroup: /system.slice/ssh.service
└─6899 /usr/sbin/sshd -D
- On the source system, rsync the whole system to the target machine (better to boot from a livecd and do this as the root user) ...
$ rsync -avPX --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/run / root@10.1.2.201:/pool/
- On the target system, monitor rsync copy process ...
root@ubuntu:~# ls -l /pool
total 23
lrwxrwxrwx. 1 root root 7 Jun 10 10:16 bin -> usr/bin
drwxr-xr-x. 150 root root 302 Dec 8 19:19 etc
drwxr-xr-x. 3 root root 3 Apr 11 2018 home
lrwxrwxrwx. 1 root root 8 Oct 10 18:32 lib -> /usr/lib
lrwxrwxrwx. 1 root root 9 Jun 10 10:16 lib64 -> usr/lib64
drwx------ 2 root root 2 Dec 8 19:32 logs
drwx------ 2 root root 2 Dec 8 19:32 lost+found
drwx------ 2 root root 2 Dec 8 19:32 media
drwx------ 2 root root 2 Dec 8 19:32 mnt
drwx------ 2 root root 2 Dec 8 19:32 opt
drwxr-xr-x 2 root root 2 Dec 8 19:11 root
lrwxrwxrwx. 1 root root 8 Jun 10 10:16 sbin -> usr/sbin
drwx------ 2 root root 2 Dec 8 19:32 srv
drwx------ 2 root root 2 Dec 8 19:32 usr
drwxr-xr-x 6 root root 6 Dec 8 19:12 var
Rsync should take a while to complete.
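If the source system stays in use while the copy runs, it can be worth re-running the same rsync (optionally with --delete) just before the final switch-over, so the target picks up any last-minute changes; this is only a suggestion, not a required step.
$ rsync -avPX --delete --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/run / root@10.1.2.201:/pool/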
Again, verify that /pool/boot content looks good ...
root@ubuntu:~# ls /pool/boot
NvVars
System.map-3.10.0-957.1.3.el7.x86_64
config-3.10.0-957.1.3.el7.x86_64
efi
grub
grub2
initramfs-0-rescue-1752044181d544db9f5a8665d63303c5.img
initramfs-0-rescue-396ef4d95e49c74ca2f65811f737be21.img
initramfs-3.10.0-957.1.3.el7.x86_64.img
initramfs-3.10.0-957.1.3.el7.x86_64kdump.img
initrd-plymouth.img
symvers-3.10.0-957.1.3.el7.x86_64.gz
vmlinuz-0-rescue-1752044181d544db9f5a8665d63303c5
vmlinuz-0-rescue-396ef4d95e49c74ca2f65811f737be21
vmlinuz-3.10.0-957.1.3.el7.x86_64
- Find the UUID for /dev/md127 and add the required entry to /pool/etc/fstab ...
root@ubuntu:~# blkid /dev/md127
/dev/md127: LABEL="EFI" UUID="FD0B-B120" TYPE="vfat"
root@ubuntu:~# echo UUID=FD0B-B120 /boot vfat noatime,nofail,x-systemd.device-timeout=1 0 0 >> /pool/etc/fstab
*Note* Remove any existing entries in /pool/etc/fstab first.
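At this point /pool/etc/fstab should contain just the new /boot entry; a quick look can confirm it (output shown for this example UUID):
root@ubuntu:~# cat /pool/etc/fstab
UUID=FD0B-B120 /boot vfat noatime,nofail,x-systemd.device-timeout=1 0 0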
- Set mountpoint type to legacy for the following datasets...
$ zfs set mountpoint=legacy rpool/var/log
$ zfs set mountpoint=legacy rpool/var/tmp
- Create a zvol for swap use
$ zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
$ mkswap /dev/zvol/rpool/swap
$ swapon /dev/zvol/rpool/swap
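A quick, optional check that the zvol is now active as swap:
$ swapon --show
It should list the rpool/swap zvol (usually shown via its /dev/zd* device node).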
- Add the following additional entries in /pool/etc/fstab
rpool/var/log /var/log zfs noatime,nodev,noexec,nosuid 0 0
rpool/var/tmp /var/tmp zfs noatime,nodev,nosuid 0 0
/dev/zvol/rpool/swap none swap defaults 0 0
- Unmount /pool
$ umount -R /pool
- Set pool cache file
$ zpool set cachefile=/etc/zfs/zpool.cache rpool
- Remount datasets and EFI boot partition.
$ zpool import -N -R /pool rpool -f
$ zfs mount rpool/ROOT/centos
$ zfs mount rpool/home
$ zfs mount rpool/home/root
$ zfs mount rpool/var/cache
$ zfs mount rpool/var/spool
$ mount -t zfs rpool/var/log /pool/var/log
$ mount -t zfs rpool/var/tmp /pool/var/tmp
$ mount /dev/md127 /pool/boot
*Note* If you receive an error like the following "filesystem 'rpool/var/tmp' cannot be mounted at '/pool/var/tmp' due to canonicalization error 2." then you need to create that folder first before mounting it.
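Before chrooting, it does no harm to double-check that everything is mounted where it should be; something along these lines is enough (optional):
$ df -h | grep -E 'pool|md127'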
- Create the following directories in /pool and bind-mount the equivalents from the Ubuntu live environment
$ for i in dev sys proc run tmp;do mkdir /pool/$i;done
$ for i in dev sys proc run;do mount -o bind /$i /pool/$i;done
- Copy /etc/zfs/zpool.cache in CentOS
$ cp /etc/zfs/zpool.cache /pool/etc/zfs
__---=From now on we must chroot into CentOS and work from there=---__
$ chroot /pool /bin/bash
- Install the ZFS packages in CentOS (if you haven't done so already) by following the instructions here.
Here, I'm using kABI type ZFS packages (as opposed to DKMS).
$ yum install zfs zfs-dracut
- Edit /etc/dracut.conf and add/modify the following line
add_dracutmodules+="zfs"
- Generate mdadm.conf file
$ mdadm --detail --scan >> /etc/mdadm.conf
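The resulting /etc/mdadm.conf should contain a single ARRAY line for the boot mirror, similar to the one below (the UUID comes from the mdadm --detail output shown earlier):
$ cat /etc/mdadm.conf
ARRAY /dev/md127 metadata=0.90 UUID=24ebcfb2:4956c4b1:e368bf24:bd0fce41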
- Generate the initramfs, amending the kernel version accordingly.
$ dracut -v -f /boot/initramfs-3.10.0-957.1.3.el7.x86_64.img --kver 3.10.0-957.1.3.el7.x86_64
- Verify that zfs and mdadm related files are included in the initramfs
$ lsinitrd /boot/initramfs-3.10.0-957.1.3.el7.x86_64.img | grep zfs
$ lsinitrd /boot/initramfs-3.10.0-957.1.3.el7.x86_64.img | grep mdadm
- Install the GRUB2 EFI packages if you haven't done so already. Also note that you need to remove any grub2-pc packages, which are used for legacy/BIOS boot mode.
$ yum install grub2-efi grub2-efi-x64-modules shim efibootmgr
$ yum remove grub2-pc grub2-pc-modules
- Edit /etc/default/grub and modify the following line
GRUB_CMDLINE_LINUX="crashkernel=auto root=ZFS=rpool/ROOT/centos"
- Install Grub files in /boot/efi
$ grub2-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=centos --recheck --no-floppy
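Optionally (this relies on the efivars exposed through the bind-mounted /sys), you can verify that a "centos" boot entry has been registered with the firmware:
$ efibootmgr -v | grep -i centos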
- Generate grub.cfg file
$ grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
Note: if you receive the following error "/usr/sbin/grub2-probe: error: failed to get canonical path of `/dev/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part1", then export the following environment variable and repeat the previous command.
$ export ZPOOL_VDEV_NAME_PATH=YES
__---=Exit chroot environment=---__
$ exit
- Unmount /pool
$ umount -R /pool
- Export pool
$ zpool export rpool
- Finally reboot and check if you can boot from the hard disk(s)
$ reboot