Sunday 29 July 2018

Configuring LINSTOR [DRBD9] storage plugin for Proxmox VE 5.x

DRBD9 is an open-source, block-level, replicated distributed storage system. It can replicate the available local storage space of a node to one or more other nodes, working much like RAID1 on a RAID controller, except that DRBD replicates the data over the network. Because DRBD9 operates at the block level, it should perform faster on some workloads, virtualisation for example, than file-system-based distributed storage systems.

  • The main difference between DRBD8 and DRBD9 is that the former is limited to two storage nodes. DRBD8 is nevertheless still heavily used in the industry, perhaps more than DRBD9, at least for now ...


In this article we'll see how to configure LINSTOR, the DRBD9 management system, to work with Proxmox VE, a well-known open-source hypervisor. We assume that you already have a working Proxmox cluster (at least 3 nodes).

The next step is to configure Linbit's repository (Linbit is the company behind DRBD) on Proxmox, so we can download the packages required for DRBD9/LINSTOR to work. A breakdown of these components follows:


  • DRBD9 kernel module
  • DRBD low-level admin tools
  • LINSTOR management tools
  • LINSTOR storage plugin for Proxmox

On all Proxmox nodes, add the Linbit repository. Make sure you adjust the PVERS variable to match your PVE version (here it is 5).

# wget -O- https://packages.linbit.com/package-signing-pubkey.asc | apt-key add -
# PVERS=5 && echo "deb http://packages.linbit.com/proxmox/ proxmox-$PVERS drbd-9.0" > /etc/apt/sources.list.d/linbit.list
# apt-get update && apt-get install linstor-proxmox
This installs only the DRBD9 Proxmox storage plugin. For an up-to-date guide about this plugin, please consult Linbit's documentation. To install the rest of the DRBD9 components, follow the steps below...
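Before doing so, it doesn't hurt to confirm that these packages actually resolve from the Linbit repository rather than from the stock Debian/Proxmox repositories; a quick check (package names as used in this article):

# apt-cache policy drbd-dkms drbd-utils linstor-proxmox    #candidate versions should point at packages.linbit.com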

# apt install pve-headers
# apt install drbd-dkms drbd-utils
# rmmod drbd; modprobe drbd
# grep -q drbd /etc/modules || echo "drbd" >> /etc/modules


  • Important! Kernel headers must be installed on each node prior to installing the drbd-dkms package, otherwise the module will fail to build.
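It is also worth verifying that the freshly built DRBD9 module, and not an older in-kernel 8.4 one, is the module actually in use; a minimal check:

# modinfo drbd -F version    #version of the installed module, should report 9.0.x
# cat /proc/drbd             #version of the currently loaded module, should also report 9.0.x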


The next step is to install the LINSTOR satellite and controller components. A satellite is a node that exposes its low-level storage (LVM, ZFS) to DRBD9. A controller is a node that orchestrates the satellites and manages resource assignments, volume management and so on. A node can be a satellite and a controller at the same time.

  • Important! There must be only one active controller node in a DRBD9 cluster.
  • Install the packages below on all Proxmox nodes; the linstor-controller service, however, must be enabled and started on only one of them (the controller node, see below).


# apt install linstor-controller linstor-satellite linstor-client


  • Enable and start the satellite service on each node.

# systemctl start linstor-satellite
# systemctl enable linstor-satellite


  • Now, decide which node will be the "controller" node. Once you have decided, enable and start the linstor-controller service on that node only.

    [Update] It is now supported to deploy the LINSTOR controller as a separate, highly available VM within the PVE cluster. For more on this, please check the latest LINSTOR documentation.
# systemctl enable linstor-controller
# systemctl start linstor-controller

  • Verify that the services are up and running on all nodes.

# systemctl status linstor-satellite
# systemctl status linstor-controller    #only on the controller node.
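If you manage the cluster from a single node, a quick loop over the members saves some typing; a sketch assuming the node names used later in this article (pve1, pve2, pve3) and root SSH access between the nodes:

# for n in pve1 pve2 pve3; do echo "== $n =="; ssh root@$n systemctl is-active linstor-satellite; done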

If all of the above was successful, we can now proceed to configure the backing storage for LINSTOR. In this example I'm using LVM thin as the storage provider; you can also use ZFS if you wish. LVM can be layered on top of a hardware RAID array, MD RAID or a single disk (the latter only for testing). So, at a high level, the storage stack will look like this:

(RAID array) <--> (LVM) <--> (DRBD9) <--> (VM)
  • Suppose we have a storage array presented to the node as /dev/sdb; let's configure LVM thin on it.
# vgcreate vg_hdd /dev/sdb


# pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sda3  pve    lvm2 a--  19.75g 2.38g
  /dev/sdb   vg_hdd lvm2 a--  30.00g    0

# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  pve      1   3   0 wz--n- 19.75g 2.38g
  vg_hdd   1   4   0 wz--n- 30.00g    0

We've created a volume group named vg_hdd which uses /dev/sdb (the RAID array) as its storage space provider.

  • Create an LVM thin pool on this volume group. The thin pool will be called "drbdthinpool" and will later be used by LINSTOR to allocate LVs for the DRBD resources, which in turn will serve as the VMs' virtual hard disks.

# lvcreate -L 29G -T vg_hdd/drbdthinpool

# lvs
  LV                  VG     Attr       LSize  Pool
  data                pve    twi-a-tz--  8.25g
  root                pve    -wi-ao----  4.75g
  swap                pve    -wi-ao----  2.38g
  drbdthinpool        vg_hdd twi-aotz-- 29.96g

  • Important! You must create drbdthinpool on all satellite nodes (a loop sketch for this follows below).
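If the backing disk has the same device name on every node, the volume group and thin pool can be created from a single host; a sketch assuming /dev/sdb on each node and root SSH access (adjust names and sizes to your environment):

# for n in pve1 pve2 pve3; do ssh root@$n "vgcreate vg_hdd /dev/sdb && lvcreate -L 29G -T vg_hdd/drbdthinpool"; done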
Now that drbdthinpool is configured on all (3) PVE nodes, we can proceed to configure LINSTOR to use this pool when allocating LVs for the DRBD resources.

  • Important! All LINSTOR commands below must be executed on the LINSTOR controller node. First, check the node status.

# linstor n l    #show LINSTOR nodes and their status.

╭───────────────────────────────────────────╮
┊ Node ┊ NodeType ┊ IPs            ┊ State  ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ pve1 ┊ COMBINED ┊ 192.168.198.20 ┊ Online ┊
┊ pve2 ┊ COMBINED ┊ 192.168.198.21 ┊ Online ┊
┊ pve3 ┊ COMBINED ┊ 192.168.198.22 ┊ Online ┊
╰───────────────────────────────────────────╯

  • Then, create a Storage Pool Definition (SPD) on LINSTOR.

# linstor spd c drbdpool    #drbdpool will be the name of the SPD.

  • Next, create the Storage Pools (SP), one for each PVE node, on LINSTOR.
# linstor sp c lvmthin pve1 drbdpool vg_hdd/drbdthinpool


A breakdown of the above parameters:
-----------------------------------------------
sp (storage pool)
c (create)
lvmthin (use the LVM thin driver)
pve1 (first Proxmox node)
drbdpool (the storage pool definition we created above)
vg_hdd/drbdthinpool (the LVM volume group and thin pool we created previously)
------------------------------------------------

  • Repeat the above command to add an SP for each individual PVE node on LINSTOR, i.e. replace the pve1 parameter with pve2 and pve3 respectively (or use a loop like the sketch below).
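A loop again saves some typing here; a sketch using the node names shown in the node list above:

# for n in pve1 pve2 pve3; do linstor sp c lvmthin $n drbdpool vg_hdd/drbdthinpool; done    #run on the controller node

When all three storage pools are created, the list should look like this ...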

root@pve1:~# linstor sp l
╭───────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool ┊ Node ┊ Driver        ┊ PoolName            ┊   Free ┊ SupportsSnapshots ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ drbdpool    ┊ pve1 ┊ LvmThinDriver ┊ vg_hdd/drbdthinpool ┊ (thin) ┊ true              ┊
┊ drbdpool    ┊ pve2 ┊ LvmThinDriver ┊ vg_hdd/drbdthinpool ┊ (thin) ┊ true              ┊
┊ drbdpool    ┊ pve3 ┊ LvmThinDriver ┊ vg_hdd/drbdthinpool ┊ (thin) ┊ true              ┊
╰───────────────────────────────────────────────────────────────────────────────────────╯

  • At this point we have finished configuring LINSTOR, so we can proceed to configure its Proxmox storage plugin. To do so, add the following lines to the Proxmox storage configuration file /etc/pve/storage.cfg ...
drbd: drbdstorage
        content images,rootdir
        controller 192.168.198.20
        redundancy 3

A breakdown of the above parameters:
----------------------------------------------
drbd (the storage type; this keyword is hardcoded by the Proxmox plugin and cannot be changed)
drbdstorage (an arbitrary name chosen to identify this storage backend in PVE; this can be changed)
content images,rootdir (this storage type supports QEMU VM images and LXC container root filesystems)
controller (the node acting as LINSTOR controller; it is mandatory to enter that node's IP address correctly, otherwise Proxmox will fail to create VMs/LXCs on DRBD)
redundancy 3 (how many replicas will be created for each VM/LXC volume; the minimum should be 2)
-----------------------------------------------
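After saving storage.cfg, Proxmox should pick the new entry up without any service restart; you can confirm that the storage defined above is visible with pvesm:

# pvesm status | grep drbdstorage    #the DRBD storage should appear in the list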
  • By now we should be ready to go, so let's open the PVE web GUI and create a QEMU VM or LXC container on the DRBD9 storage...
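If you prefer the command line over the web GUI, a VM disk can be allocated on the new storage directly with qm; a sketch with an arbitrary VM ID, name and disk size (13 GB here, matching the example output further down):

# qm create 100 --name testvm --memory 2048 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci --scsi0 drbdstorage:13
# qm start 100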




  • You should also be able to see the associated LVM thin volumes for each individual DRBD resource/VM...
  root@pve1:~# lvs
  LV                  VG     Attr       LSize  Pool         Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data                pve    twi-a-tz--  8.25g                     0.00   0.01
  root                pve    -wi-ao----  4.75g
  swap                pve    -wi-ao----  2.38g
  drbdthinpool        vg_hdd twi-aotz-- 29.96g                     22.32  17.44
  vm-100-disk-1_00000 vg_hdd Vwi-aotz-- 13.01g drbdthinpool        23.12
  vm-101-disk-1_00000 vg_hdd Vwi-aotz--  9.01g drbdthinpool        10.60
  vm-102-disk-1_00000 vg_hdd Vwi-aotz-- 10.00g drbdthinpool        27.22
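Besides lvs, the state of the replicated resources themselves can be checked either from LINSTOR or from DRBD directly:

# linstor resource list    #on the controller node: shows where each resource is placed and its state
# drbdadm status           #on any node: shows the local DRBD resources and their connections to the peers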


  • If you run into problems, please read the DRBD9/LINSTOR documentation on Linbit's website carefully to familiarise yourself with the LINSTOR command line, or subscribe to the DRBD mailing list.




1 comment:

  1. Nice blog .....working for me

    Small correction:
    linstor sp c pve1 drbdpool lvmthin vg_hdd/drbdthinpool ---Incorrect Syntax

    linstor sp c lvmthin pve1 drbdpool vg_hdd/drbdthinpool ----Correct one

    Thanks

