Monday 30 July 2018

Issues booting to CentOS7 after resizing its root/swap LV

I recently faced the following strange issue...

Some background first

I installed CentOS7 on a VM with an assigned vhd size of 10G, nothing fancy so far ...
Installation was successful and I was able to boot into the O/S without issues.

Later, at some point, I decided to increase the vhd size from 10G to 13G. To accomplish that, I followed the steps below...

- Turned off the VM
- Resized the vhd to the new size
- Turned on the VM
- Booted from LiveCD [CentOS7 dvd]

  •     lsblk    

    [root@localhost ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   13G  0 disk
├─sda1            8:1    0  500M  0 part /boot
└─sda2            8:2    0 12.5G  0 part
  ├─centos-root 253:0    0 11.5G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]  
     

  •        fdisk /dev/sda

    Here, I deleted the second partition (/dev/sda2), which is the LVM partition where the centos/root and centos/swap LVs reside. Then I recreated it, making sure that the start sector was the same as before deleting the partition. Obviously, the end sector will be different since we increased the vhd size. Finally, I set the partition type to 8e (Linux LVM), saved the settings and exited. (8e is simply the 'Linux LVM' type code; the partition itself stays a primary partition.)
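    A rough sketch of that fdisk dialogue (prompts abbreviated; note the exact start sector from the 'p' output of your own system first):

       # fdisk /dev/sda
         p                #print the partition table and note the Start sector of /dev/sda2
         d -> 2           #delete partition 2
         n -> p -> 2      #recreate it as primary partition number 2
                          #  First sector: the same start sector as noted above
                          #  Last sector : accept the default (the end of the disk)
         t -> 2 -> 8e     #set the partition type to 8e (Linux LVM)
         w                #write the changes and exit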

  •        pvresize /dev/sda2        


  •        lvresize -l +100%FREE centos/root

     
       [root@localhost ~]# lvdisplay centos/root
  --- Logical volume ---
  LV Path                /dev/centos/root
  LV Name                root
  VG Name                centos
  LV UUID                1rlD1f-BSw4-R3rl-z22I-GtDN-rykj-cxdlXL
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2018-07-27 23:06:39 +0100
  LV Status              available
  # open                 1
  LV Size                <11.52 GiB
  Current LE             2948
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

     
So far, so good... the centos/root LV, which is the backing device for the root file system (/), has had its size increased.


  •      mkdir /mnt/root   #create a temp directory to mount centos/root LV
  •      mount /dev/mapper/centos-root /mnt/root   #mount centos/root to /mnt/root
  •      xfs_growfs  /mnt/root   #grow the root filesystem to match the new centos/root LV size. Did I mention that the root fs is XFS?
  •      umount /mnt/root   #unmount root fs 
  •      reboot   #reboot the system, and boot from hdd this time [not LiveCD]


The issue

After rebooting from LiveCD to the normal hard drive (now 13G), the following happened ...

     - GRUB menu appeared and loaded the default kernel (3.10.0-862.9.1.el7.x86_64).
     - The kernel loaded and the initial boot sequence started.
     - The kernel handed over to the initramfs (initrd).
     - The initrd failed to detect/activate the centos/root and centos/swap LVs, so after a while I was dropped to the dracut shell...
      In the dracut shell, I executed the following commands:
           - lvm lvchange -ay centos/root   #success
           - lvm lvchange -ay centos/swap  #success
           - ln -s /dev/mapper/centos-root /dev/root   #required for boot process to continue to the real root fs (/).
           - exit  #exit dracut shell

At this point the boot process continues and finally I can log in to the normal CentOS7 installation. But the question remains: why do I have to do all this stuff? Wasn't it supposed to happen automatically? The answer is yes, it should... but something went wrong (obviously), and now it needs my manual intervention to succeed.

The solution

After 5 days of continuous searching, a solution was finally found. The problem was not in the initramfs, but in the fact that the /dev/sda2 partition (PV), after I resized it, somehow ended up with 2 partition table signatures.
One was set as a "dos" partition with an offset "0x1fe" and the second was set as "LVM_member" with an offset "0x8e".
This confused blkid in the initramfs during the initial boot stage into thinking that /dev/sda2 is *not* an LVM_member but rather a plain "dos" partition, so it refused to activate the centos/root and centos/swap LVs which were required to boot the machine into the O/S.
What's interesting is that neither fdisk nor parted was showing this second signature; however, there was a trace of it in the dracut report file (/run/initramfs/rdsosreport.txt), specifically in the excerpt below...

+ blkid
/dev/sr0: UUID="2016-10-28-12-18-36-00" LABEL="CentOS 7 x86_64" TYPE="iso9660" PTTYPE="dos" 
/dev/sda1: UUID="a620a180-3a8c-4b5f-ad30-804f131a7261" TYPE="xfs" 
/dev/sda2: PTTYPE="dos" 
+ blkid -o udev
ID_FS_UUID=2016-10-28-12-18-36-00
ID_FS_UUID_ENC=2016-10-28-12-18-36-00
ID_FS_LABEL=CentOS_7_x86_64
ID_FS_LABEL_ENC=CentOS\x207\x20x86_64
ID_FS_TYPE=iso9660
ID_PART_TABLE_TYPE=dos
To fix this problem, I booted the system from the LiveCD and used the "wipefs" utility to erase the problematic "dos" signature, leaving only the correct one, "LVM_member".


The exact command was: 'wipefs -o 0x1fe -ff /dev/sda2' 
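For reference, running wipefs with no erase options simply lists the signatures it detects, without touching the disk, so you can verify what's there before and after the fix:

  •      wipefs /dev/sda2   #list detected signatures only; nothing is erased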

Then, after a reboot, voila, the system booted without issues ...



Sunday 29 July 2018

Configuring LINSTOR [DRBD9] storage plugin for Proxmox VE 5.x

DRBD9 is an opensource, block-level, replicated distributed storage system [link]. It can replicate the available local storage space of an existing computer node to one or more other nodes. It works similarly to the way RAID1 works on RAID controllers, but DRBD uses the network to replicate the data. Since DRBD9 operates at the block level, it should perform faster on some workloads, such as virtualisation, compared to other, filesystem-based distributed storage systems.

  • The main difference between DRBD8 and DRBD9 is that the former is limited to only two storage nodes. However, DRBD8 is still heavily used in the industry, perhaps more than DRBD9, at least for now ...


In this article we'll see how to configure DRBD9's management system, called LINSTOR, to work with Proxmox VE, a well-known opensource hypervisor. We assume that you already have a working Proxmox cluster (at least 3 nodes).

The next step will be to configure Linbit's (the company behind DRBD) repository on Proxmox, to download the necessary packages, required for DRBD9/Linstor to work. Below follows a breakdown of these components:


  • DRBD9 kernel module
  • DRBD low-level admin tools
  • LINSTOR management tools
  • LINSTOR storage plugin for Proxmox

On all Proxmox nodes, add the Linbit repository. Make sure you modify the PVERS variable value to match your PVE version (here it is 5).

# wget -O- https://packages.linbit.com/package-signing-pubkey.asc | apt-key add -
# PVERS=5 && echo "deb http://packages.linbit.com/proxmox/ proxmox-$PVERS drbd-9.0" > /etc/apt/sources.list.d/linbit.list
 
# apt-get update && apt-get install linstor-proxmox
This will install the DRBD9 Proxmox storage plugin only. For an up-to-date guide about this plugin, please follow this link. To install the rest of the DRBD9 components, follow the steps below...

#apt install pve-headers
#apt install drbd-dkms drbd-utils
#rmmod drbd; modprobe drbd
#grep -q drbd /etc/modules || echo "drbd" >> /etc/modules


  • Important! Kernel headers must be installed on each node, prior to installing drbd-dkms package, otherwise it will fail to build.
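To double-check that the module now loaded is the DKMS-built DRBD9 one (rather than any older in-kernel version), something like the following should do; the exact version strings will differ on your setup:

#modinfo drbd | grep -i version    #the module version should be 9.0.x
#cat /proc/drbd                    #should also report a 9.0.x version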


The next step is to install the LINSTOR satellite and controller components. A satellite is a node which provides DRBD9 with access to the low-level storage system (LVM, ZFS). A controller is a node which orchestrates the satellite nodes and manages things like resource assignments, volume management etc. A node can be a satellite and a controller at the same time.

  • Important! There must be only one controller node active on a DRBD9 cluster.
  • You must install the packages below on all Proxmox nodes; the linstor-controller service, however, must be enabled and started on only one of them.


#apt install linstor-controller linstor-satellite linstor-client


  • Enable and start the satellite service on each node.

#systemctl start linstor-satellite
#systemctl enable linstor-satellite


  • Now, you must decide which node will be the "controller" node. Once you decide, enable and start the linstor-controller service on that node.

    [Update] It's now supported to deploy the LINSTOR Controller as a separate HA VM within the PVE cluster. For more on this, please check the latest LINSTOR documentation.
#systemctl enable linstor-controller
#systemctl start linstor-controller

  • Verify that the services are up and running on all nodes.

#systemctl status linstor-satellite
#systemctl status linstor-controller    #only on controller node.

If the above was successful, we can now proceed to configure the backing storage for LINSTOR. In my example, I'm using LVM thin as the storage provider. You can also use ZFS if you wish. LVM can sit on top of a hardware RAID array, MD RAID or a single disk (only for testing). So, at a high level, the storage configuration will be the following ...

(RAID array) <---> (LVM) <---> (DRBD9) <---> (VM)
  • Suppose we have a storage array presented to the system as /dev/sdb; let's configure LVM Thin on it...
#vgcreate vg_hdd /dev/sdb


# pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sda3  pve    lvm2 a--  19.75g 2.38g
  /dev/sdb   vg_hdd lvm2 a--  30.00g    0

# vgs
VG     #PV #LV #SN Attr   VSize  VFree
pve      1   3   0 wz--n- 19.75g 2.38g
vg_hdd   1   4   0 wz--n- 30.00g    0

We've created a volume group named vg_hdd which is using /dev/sdb [the RAID array] as its storage provider.

  • Create an LVM ThinPool on this Volume Group. This ThinPool will be called "drbdthinpool" and it will later be used by LINSTOR to allocate LVs for the DRBD resources, which in turn will be used as VM virtual hard disks.

#lvcreate -L 29G -T vg_hdd/drbdthinpool

# lvs
  LV                  VG     Attr       LSize  Pool
  data                pve    twi-a-tz--  8.25g
  root                pve    -wi-ao----  4.75g
  swap                pve    -wi-ao----  2.38g
  drbdthinpool        vg_hdd twi-aotz-- 29.96g

  • Important! You must create drbdthinpool on all satellite nodes, i.e. repeat the vgcreate/lvcreate steps above on each of them.
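A minimal sketch, assuming the backing disk is /dev/sdb on those nodes as well:

#vgcreate vg_hdd /dev/sdb                  #on pve2 and pve3 too
#lvcreate -L 29G -T vg_hdd/drbdthinpool    #the VG/thin pool names must match on every node
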
Now that we have configured drbdthinpool on all (3) PVE nodes, we can proceed to configure LINSTOR to use this pool for allocating LVs for the DRBD resources.

  • Important! All LINSTOR commands below must be executed on the LINSTOR controller node. First check the nodes status.

#linstor n l    #show LINSTOR nodes and their status.

╭───────────────────────────────────────────╮
┊ Node ┊ NodeType ┊ IPs            ┊ State  ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ pve1 ┊ COMBINED ┊ 192.168.198.20 ┊ Online ┊
┊ pve2 ┊ COMBINED ┊ 192.168.198.21 ┊ Online ┊
┊ pve3 ┊ COMBINED ┊ 192.168.198.22 ┊ Online ┊
╰───────────────────────────────────────────╯
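(Note: if your nodes do not appear in this list yet, they first have to be registered with the controller. A sketch using the node names/IPs of this example; if your linstor-client version differs, 'linstor node create -h' shows the exact syntax.)

#linstor node create pve1 192.168.198.20 --node-type Combined
#linstor node create pve2 192.168.198.21 --node-type Combined
#linstor node create pve3 192.168.198.22 --node-type Combined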

  • Then, create a Storage Pool Definition (SPD) on LINSTOR.

#linstor spd c drbdpool   #drbdpool will be the name of the SPD.

  • Next, create the Storage Pools (SP) for each PVE node on LINSTOR.
# linstor sp c lvmthin pve1 drbdpool vg_hdd/drbdthinpool


A breakdown of the above parameters:
-----------------------------------------------
sp (storage pool)
c (create)
lvmthin (use the LVM thin driver)
pve1 (first proxmox node)
drbdpool (the storage pool definition we created above)
vg_hdd/drbdthinpool (the LVM Volume Group and thin pool we created previously)
------------------------------------------------

  • Repeat the above command to add an SP for each individual PVE node on LINSTOR, i.e. replace the pve1 parameter with pve2 and pve3 respectively (the exact commands are listed right after the table). When done, it should look like below ...

root@pve1:~# linstor sp l
╭──────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool ┊ Node ┊ Driver        ┊ PoolName            ┊   Free ┊ SupportsSnapshots ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ drbdpool    ┊ pve1 ┊ LvmThinDriver ┊ vg_hdd/drbdthinpool ┊ (thin) ┊ true              ┊
┊ drbdpool    ┊ pve2 ┊ LvmThinDriver ┊ vg_hdd/drbdthinpool ┊ (thin) ┊ true              ┊
┊ drbdpool    ┊ pve3 ┊ LvmThinDriver ┊ vg_hdd/drbdthinpool ┊ (thin) ┊ true              ┊
╰──────────────────────────────────────────────────────────────────────────────────────╯
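For completeness, the commands that produce the pve2 and pve3 entries above are simply the same 'sp c' command with the node name swapped:

# linstor sp c lvmthin pve2 drbdpool vg_hdd/drbdthinpool
# linstor sp c lvmthin pve3 drbdpool vg_hdd/drbdthinpool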

  • At this point we should have finished configuring LINSTOR, so we can proceed to configure its Proxmox storage plugin. To do so, we must add the following lines to the Proxmox storage configuration file /etc/pve/storage.cfg ...
drbd: drbdstorage
        content images,rootdir
        controller 192.168.198.20
        redundancy 3

A breakdown of the above parameters:
----------------------------------------------
drbd (the storage type identifier, hardcoded by Proxmox for the DRBD storage type; this cannot be changed)
drbdstorage (an arbitrary name chosen to identify this storage backend on PVE; this can be changed)
content images,rootdir (this storage type supports Qemu VM images and LXC container root disks)
controller (this defines which node acts as the LINSTOR controller. It's mandatory to type that node's IP address correctly, otherwise Proxmox will fail to create VMs/LXCs on DRBD)
redundancy 3 (how many replicas will be created for each VM/LXC; the minimum should be 2)
-----------------------------------------------
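As an optional sanity check, Proxmox's pvesm tool should now list the new storage entry on any PVE node:

#pvesm status    #the 'drbdstorage' entry should appear in the list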
  • By now we should be ready to go, so let's open PVE webgui management and create a Qemu VM or LXC container on DRBD9 storage...




  • You should also be able to see the associated LVM thin volumes for each individual DRBD resource/VM...
  root@pve1:~# lvs
  LV                  VG     Attr       LSize  Pool         Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data                pve    twi-a-tz--  8.25g                     0.00   0.01
  root                pve    -wi-ao----  4.75g
  swap                pve    -wi-ao----  2.38g
  drbdthinpool        vg_hdd twi-aotz-- 29.96g                     22.32  17.44
  vm-100-disk-1_00000 vg_hdd Vwi-aotz-- 13.01g drbdthinpool        23.12
  vm-101-disk-1_00000 vg_hdd Vwi-aotz--  9.01g drbdthinpool        10.60
  vm-102-disk-1_00000 vg_hdd Vwi-aotz-- 10.00g drbdthinpool        27.22


  • If you have any problems, please read the DRBD9/LINSTOR documentation on Linbit's web site carefully to familiarise yourself with the LINSTOR command line, or subscribe to the DRBD mailing list.




Troubleshooting mirrored dynamic disks on Windows Server 2008 R2

Problem Description:
You have a Windows Server 2003 or 2008 (R2) server configured with mirrored dynamic disks. One of the disks fails and you need to replace the failed disk with a new one. You buy the new disk, connect it to the slot where the failed disk was and power on your server. You go to Disk Management, convert the new disk to a dynamic disk and remove the old (missing) mirror. Now you are ready to create the mirror again, but suddenly you get the following error message: “All disks holding extents for a given volume must have the same sector size”.
As described here, what this message says is that the old disk in the mirrored array (a 500GB Dell in my case) and the new disk (a 1TB WD Black in my case) have different sector sizes (512 bytes for the 500GB disk and 4096 bytes for the 1TB disk). The new 1TB disk was an Advanced Format disk, and as a result I could not create a mirrored volume between them. I needed to buy another 1TB 4K-formatted hdd, clone (or image & restore) the old 500GB disk to the new disk and finally create the mirrored array between the two 1TB disks.
OK, so I bought another 1TB WD Black and was ready to clone the data from the old disk to the new one. The first problem I faced is that most imaging programs cannot handle dynamic disks at all. For example, Acronis needs a special add-on called “Plus Pack” to be able to create an image of a dynamic disk (you must break the mirror first, because mirrored volumes are not supported) and then restore it to the new one.
Solution:
Personally I followed the steps below:
  1. Downloaded Hiren’s boot cd and used Norton Ghost to clone 500gb hdd -> 1tb hdd.
  2. After cloning, I had a BASIC 1TB disk with all the data inside. I removed the 500GB disk, left only this 1TB disk in the slot and booted the machine.
  3. Got “0xc000000e Info: The boot selection failed because a required device is inaccessible”.
  4. I put the Windows Server 2008 R2 DVD in the DVD-RW drive and booted from there. Fortunately the boot CD found the bootloader problems and automatically fixed them. If that didn't work for you, try this article.
  5. Rebooted again and voila, finally the system booted correctly!
  6. Powered off the server, installed the 2nd 1TB hdd, powered on, converted both disks to dynamic disks and finally enabled mirroring between them.

Make Windows Installer work under safe mode

To make Windows Installer work under safe mode, you need to create a registry entry for every type of safe mode you are logged in to.
  1. Safe Mode.
    Type this in a command prompt:
    REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\Minimal\MSIServer" /VE /T REG_SZ /F /D "Service"
    
    
    and then
    net start msiserver
    
    
    This will start the Windows Installer Service.
  2. Safe Mode with Networking
    REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\Network\MSIServer" /VE /T REG_SZ /F /D "Service"
    
    
    and followed by
    net start msiserver 
    
    
    This will start the Windows Installer Service.

How to enable ‘Out of Office’ in an Office 365 Distribution Group

Step-by-step guide

  1. Connect to Exchange Online with Windows PowerShell and run the cmdlet:
    Set-DistributionGroup -Identity <group name> -SendOofMessageToOriginatorEnabled $true
  2. If it is a DDG (Dynamic Distribution Group), please use this command instead:
    Set-DynamicDistributionGroup -Identity <group name> -SendOofMessageToOriginatorEnabled $true
  3. Configure a user’s mailbox (which is a member of the DG) in your Outlook and set the OOOM.
  4. Send a test message to the DG to confirm.
  5. In case Office 365 is syncing with an on-premises Active Directory, log in to the client's AD
  6. Locate the Distribution Group -> Properties -> Attribute Editor (Enable Advanced attributes first)
  7. Locate the field ‘oOFReplyToOriginator’ and set it to ‘TRUE’
  8. Open PowerShell and sync with Office 365:
  9. Import-Module DirSync
  10. Start-OnlineCoexistenceSync
  11. Configure a user’s mailbox (which is a member of the DG) on your Outlook and set the OOOM.
  12. Send a test message to the DG to confirm

Permanently set interface in promiscuous mode

The quickest and probably the easiest way to set an interface to promiscuous mode in Linux is to run the following command and add the same to the /etc/rc.local file to survive a reboot.
ifconfig eth0 up promisc
Another option to make it permanent is to edit and update the network configuration file, namely /etc/network/interfaces.
Remove the lines related to eth0 and update the file to look something like this:
auto eth0
iface eth0 inet manual
up ifconfig eth0 up promisc
down ifconfig eth0 down -promisc
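On newer distributions where ifconfig is deprecated in favour of iproute2, the (non-persistent) equivalent command is:
ip link set dev eth0 promisc on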

Force manual sync with Office 365 via Powershell

Start the Azure Active Directory Powershell console
Once it has loaded run the following two commands:
  • Import-Module DirSync
  • Start-OnlineCoexistenceSync
The sync will start immediately.

Changing email password in Office 365

Go to https://login.microsoftonline.com/ and log in using your email address as your User ID, and your password for password.
  1. Once signed in, under Outlook click Options
  2. On the right-hand side there is a section called ‘Shortcuts to other things you can do’; click on ‘Change your password’
  3. This will now take you to the section where you can change your password

Proxmox cluster | Reverse proxy with noVNC and SPICE support

I have a 3 node proxmox cluster in production and I was trying to find a way to centralize the webgui management.
Currently, the only way to access the proxmox cluster web interface is by connecting to each cluster node individually, e.g. https://pve1:8006, https://pve2:8006, etc. from your web browser.
The disadvantage of this is that you have either to bookmark every single node on your web browser, or type the url manually each time.
Obviously this can become pretty annoying, especially as you are adding more nodes into the cluster.
Below I will show how I managed to access any of my PVE cluster nodes' web interfaces by using a single dns/host name (e.g. https://pve in my case).
Note that you don’t even need to type the default proxmox port (8006) after the hostname since Nginx will listen to default https port (443) and forward the request to the backend proxmox cluster nodes on port 8006.
My first target was the web management console; the second was making noVNC and SPICE work too. The latter proved to be trickier.
We will use Nginx to handle Proxmox web and noVNC console traffic (port 8006) and HAProxy to handle SPICE traffic (port 3128).
Note The configuration below has been tested with the following software versions:
  • Debian GNU/Linux 8.6 (jessie)
  • nginx version: nginx/1.6.2
  • HA-Proxy version 1.5.8
  • proxmox-ve: 4.3-66 (running kernel: 4.4.19-1-pve)
What you will need
1. A basic Linux vm. My preference for this tutorial was Debian Jessie.
2. Nginx + HAProxy for doing the magic.
3. OpenSSL packages to generate the self signed certificates.
4. Obviously a working proxmox cluster.
5. Since this will be a critical vm, it would be a good idea to configure it as an HA virtual machine in your proxmox cluster.
The steps
– Download Debian Jessie net-install.
– Assign a static IP address and create the appropriate DNS record on your DNS server (if available, otherwise use just hostnames).
In my case, I created an A record named ‘pve’ which points to 10.1.1.10. That means that when you complete this guide you will be able to access all proxmox nodes by using https://pve (or https://pve.domain.local) in your browser! You will not even need to type the default port, which is 8006.
– Update package repositories by entering ‘apt-get update’
– Install Nginx and HAProxy:
apt-get install nginx && apt-get install haproxy
Nginx and OpenSSL setup
– Assuming that you are logged in as root, create backup copy of the default config file.
cp /etc/nginx/sites-enabled/default /root
– Remove /etc/nginx/sites-enabled/default:
rm /etc/nginx/sites-enabled/default
– Download OpenSSL packages:
apt-get install openssl
– Generate a private key (select a temp password when prompted):
openssl genrsa -des3 -out server.key 1024
– Generate a csr file (select the same temp password if prompted):
openssl req -new -key server.key -out server.csr
– Remove the password from the key:
openssl rsa -in server.key -out server_new.key
– Remove old private key and rename the new one:
rm server.key && mv server_new.key server.key
– Make sure only root has access to private key:
chown root server.key && chmod 600 server.key
– Generate a certificate:
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
– Create a directory called ssl in /etc/nginx folder and copy server.key and server.crt files:
mkdir /etc/nginx/ssl && cp server.key /etc/nginx/ssl && cp server.crt /etc/nginx/ssl
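– Side note: if you prefer, the generate/strip-password/self-sign steps above can be collapsed into a single self-signed certificate command that produces the same two files (a sketch, using the same paths):
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/server.key -out /etc/nginx/ssl/server.crt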
– Create an empty file:
vi /etc/nginx/sites-enabled/proxmox-gui
– Paste the code below and save the file. Make sure that you change the ip addresses to match your proxmox nodes ip addresses:
Edit (11-11-2017)
upstream proxmox {
ip_hash;    #added ip hash algorithm for session persistency
server 10.1.1.2:8006;
server 10.1.1.3:8006;
server 10.1.1.4:8006;
}
server {
listen 80 default_server;
rewrite ^(.*) https://$host$1 permanent;
}
server {
listen 443;
server_name _;
ssl on;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $http_host;
location / {
proxy_pass https://proxmox;
}
}
– Create a symlink for /etc/nginx/sites-enabled/proxmox-gui in /etc/nginx/sites-available:
ln -s /etc/nginx/sites-enabled/proxmox-gui /etc/nginx/sites-available
– Verify that the symlink has been created and it’s working:
ls -ltr /etc/nginx/sites-available && cat /etc/nginx/sites-available/proxmox-gui (You should see the above contents after this)
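– It's also a good idea to validate the configuration syntax before starting the service:
nginx -t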
– That’s it! You can now start Nginx service:
systemctl start nginx.service && systemctl status nginx.service (verify that it is active (running)).
HAProxy Setup
– Create a backup copy of the default config file.
cp /etc/haproxy/haproxy.cfg /root
– Create an empty /etc/haproxy/haproxy.cfg file (or remove its contents):
vi /etc/haproxy/haproxy.cfg
– Paste the following code and save the file. Again make sure that you change the ip addresses to match your proxmox hosts. Also note that the hostnames must also match your pve hostnames, e.g pve1, pve2, pve3
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
maxconn 4096
user haproxy
group haproxy
daemon
stats socket /var/run/haproxy/haproxy.sock mode 0644 uid 0 gid 107
defaults
log global
mode tcp
option tcplog
option dontlognull
retries 3
option redispatch
maxconn 2000
timeout connect 5000
timeout client 50000
timeout server 50000
listen proxmox_spice *:3128
mode tcp
option tcpka
balance roundrobin
server pve1 10.1.1.2:3128 weight 1
server pve2 10.1.1.3:3128 weight 1
server pve3 10.1.1.4:3128 weight 1
– Note that the above configuration has been tested on HA-Proxy version 1.5.8.
If the HAProxy service fails to start, please troubleshoot by running:
haproxy -f /etc/haproxy/haproxy.cfg ...and check for errors.
– Start HAProxy service:
systemctl start haproxy.service && systemctl status haproxy.service (Must show active and running)
Testing our setup…
Open a web browser and enter https://pve. You should be able to access the PVE webgui. (Remember, in my case I have assigned ‘pve’ as the hostname of the Debian VM and I have also created a matching A record on my DNS server. Your client machine must be able to resolve that address properly, otherwise it will fail to load the proxmox webgui.)
You can now also test noVNC console and SPICE. Please note that you may need to refresh noVNC window in order to see the vm screen.
UPDATE: You can seamlessly add SSH to the proxied ports if you wish to ssh into any of the pve hosts.
Just add the lines below to your /etc/haproxy/haproxy.cfg file. Note that I'm using port 222 instead of 22 in order to avoid a port conflict with the actual Debian vm, which already listens on tcp port 22.

listen proxmox_ssh *:222
mode tcp
option tcpka
balance roundrobin
server pve1 10.1.1.2:22 weight 1
server pve2 10.1.1.3:22 weight 1
server pve3 10.1.1.4:22 weight 1
Now if you try to connect from your machine as root@pve on port 222 (ssh root@pve -p 222), the first time you will be asked to save the ECDSA key of the host to your .ssh/known_hosts file and then you will log in to the first proxmox node, e.g. pve1.
If you attempt to connect a second time, your request will be rejected, since HAProxy will forward it to the second proxmox node, e.g. pve2, which happens to have a different host key fingerprint from the first. This is of course good for security reasons, but in this case we need to disable the check for the proxied host, otherwise we will not be able to connect to it.
– On your client machine, modify /etc/ssh/ssh_config file (not sshd_config !).
– Remove the following entry:
Host *
– Add the following at the end of the file:
Host pve
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
ServerAliveInterval 5
This will disable the ECDSA host key checks ONLY for the host pve and keep them enabled for ALL other hostnames, so in short it's quite a restrictive setting. ServerAliveInterval is used in order to keep the ssh session alive during periods of inactivity. I've noticed that without that parameter the ssh client will drop the session quite often.