Sunday, 29 July 2018

Make Windows Installer work under safe mode

To make Windows Installer work under Safe Mode, you need to create a registry entry for the type of Safe Mode you have booted into.
  1. Safe Mode.
    Type this in a command prompt:
    REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\Minimal\MSIServer" /VE /T REG_SZ /F /D "Service"
    
    
    and then
    net start msiserver
    
    
    This will start the Windows Installer Service.
  2. Safe Mode with Networking
    REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\Network\MSIServer" /VE /T REG_SZ /F /D "Service"
    
    
    and followed by
    net start msiserver 
    
    
    This will start the Windows Installer Service.
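If you would rather not pick the key by hand, the sketch below uses the SAFEBOOT_OPTION environment variable (which Windows sets to Minimal or Network while in safe mode) to add the matching entry and start the service. Treat it as an illustration rather than a tested script:
@echo off
rem Abort if we are not actually running in safe mode
if "%SAFEBOOT_OPTION%"=="" (
    echo Not running in safe mode.
    exit /b 1
)
rem Register the Windows Installer service for the current safe-boot type, then start it
REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\%SAFEBOOT_OPTION%\MSIServer" /VE /T REG_SZ /F /D "Service"
net start msiserver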

How to enable ‘Out of Office’ in a Office 365 Distribution Group

Step-by-step guide

  1. Connect to Exchange Online with Windows PowerShell, then run the cmdlet (substituting your group's name for <GroupName>):
    Set-DistributionGroup -Identity <GroupName> -SendOofMessageToOriginatorEnabled $true
  2. If it is a Dynamic Distribution Group (DDG), use this command instead:
    Set-DynamicDistributionGroup -Identity <GroupName> -SendOofMessageToOriginatorEnabled $true
  3. Configure a user’s mailbox (which is a member of the DG) in your Outlook and set the OOOM.
  4. Send a test message to the DG to confirm.
  5. If Office 365 is syncing with an on-premises Active Directory, log in to the client's AD server.
  6. Locate the Distribution Group -> Properties -> Attribute Editor (enable Advanced Features in the View menu first).
  7. Locate the field 'oOFReplyToOriginator' and set it to 'TRUE'.
  8. Open PowerShell and sync with Office 365 by running the following:
  9. Import-Module DirSync
  10. Start-OnlineCoexistenceSync
  11. Configure a user’s mailbox (which is a member of the DG) on your Outlook and set the OOOM.
  12. Send a test message to the DG to confirm
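To confirm the change from the same Exchange Online PowerShell session, a quick check (a sketch, again substituting your own group name) could be:
Get-DistributionGroup -Identity <GroupName> | Format-List Name,SendOofMessageToOriginatorEnabled
The property should report True once the change has been applied.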

Permanently set interface in promiscuous mode

The quickest and probably easiest way to set an interface to promiscuous mode in Linux is to run the following command; add the same line to /etc/rc.local so the setting survives a reboot.
ifconfig eth0 up promisc
Another way to make it permanent is to edit the network configuration file, /etc/network/interfaces.
Remove the existing lines related to eth0 and update the file to look something like this:
auto eth0
iface eth0 inet manual
up ifconfig eth0 up promisc
down ifconfig eth0 down -promisc
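On newer distributions where ifconfig is deprecated, the iproute2 equivalent (a quick sketch, assuming the interface is still named eth0) is:
ip link set eth0 promisc on
and you can verify it took effect with:
ip link show eth0    (look for PROMISC in the flags)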

Force manual sync with Office 365 via Powershell

Start the Azure Active Directory Powershell console
Once it has loaded run the following two commands:
  • Import-Module DirSync
  • Start-OnlineCoexistenceSync
The sync will start immediately.
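Note that the DirSync module belongs to the older Directory Sync tool. If the tenant has since been migrated to Azure AD Connect, the equivalent commands on the sync server are typically:
Import-Module ADSync
Start-ADSyncSyncCycle -PolicyType Delta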

Changing email password in Office 365

Go to https://login.microsoftonline.com/ and log in using your email address as your User ID, and your password for password.
  1. Once signed in, under Outlook click options
  2. On the right hand side there is a section called ‘Shortcuts to other things you can do’, click on ‘Change your password
  3. This will now take you to the section where you can change your password

Proxmox cluster | Reverse proxy with noVNC and SPICE support

I have a 3 node proxmox cluster in production and I was trying to find a way to centralize the webgui management.
Currently the only way to access proxmox cluster web interface is by connecting to each cluster node individually, e.g https://pve1:8006 , https://pve2:8006 etc from your web browser.
The disadvantage of this is that you either have to bookmark every single node in your web browser, or type the URL manually each time.
Obviously this can become pretty annoying, especially as you are adding more nodes into the cluster.
Below I will show how I managed to access any of my PVE cluster nodes' web interfaces using a single DNS/host name (e.g. https://pve in my case).
Note that you don’t even need to type the default proxmox port (8006) after the hostname since Nginx will listen to default https port (443) and forward the request to the backend proxmox cluster nodes on port 8006.
My first target was the web management console; the second was making noVNC and SPICE work too, which proved to be the trickier part.
We will use Nginx to handle Proxmox web and noVNC console traffic (port 8006) and HAProxy to handle SPICE traffic (port 3128).
Note The configuration below has been tested with the following software versions:
  • Debian GNU/Linux 8.6 (jessie)
  • nginx version: nginx/1.6.2
  • HA-Proxy version 1.5.8
  • proxmox-ve: 4.3-66 (running kernel: 4.4.19-1-pve)
What you will need
1. A basic Linux vm. My preference for this tutorial was Debian Jessie.
2. Nginx + HAProxy for doing the magic.
3. OpenSSL packages to generate the self signed certificates.
4. Obviously a working proxmox cluster.
5. Since this will be a critical VM, it would be a good idea to configure it as an HA virtual machine in your Proxmox cluster.
The steps
– Download Debian Jessie net-install.
– Assign a static IP address and create the appropriate DNS record on your DNS server (if available, otherwise use just hostnames).
In my case, I created an A record named ‘pve‘ pointing to 10.1.1.10. That means that when you complete this guide you will be able to access all Proxmox nodes by using https://pve (or https://pve.domain.local) in your browser! You will not even need to type the default port, 8006.
– Update package repositories by entering ‘apt-get update’
– Install Nginx and HAProxy:
apt-get install nginx && apt-get install haproxy
Nginx and OpenSSL setup
– Assuming that you are logged in as root, create a backup copy of the default config file.
cp /etc/nginx/sites-enabled/default /root
– Remove /etc/nginx/sites-enabled/default:
rm /etc/nginx/sites-enabled/default
– Download OpenSSL packages:
apt-get install openssl
– Generate a private key (select a temp password when prompted):
openssl genrsa -des3 -out server.key 1024
– Generate a csr file (select the same temp password if prompted):
openssl req -new -key server.key -out server.csr
– Remove the password from the key:
openssl rsa -in server.key -out server_new.key
– Remove old private key and rename the new one:
rm server.key && mv server_new.key server.key
– Make sure only root has access to private key:
chown root server.key && chmod 600 server.key
– Generate a certificate:
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
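(If you prefer, the key/CSR/certificate steps above can be collapsed into a single command that produces an unencrypted 2048-bit key and a self-signed certificate in one go; this is just a shortcut using the same file names:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout server.key -out server.crt )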
– Create a directory called ssl in /etc/nginx folder and copy server.key and server.crt files:
mkdir /etc/nginx/ssl && cp server.key /etc/nginx/ssl && cp server.crt /etc/nginx/ssl
– Create an empty file:
vi /etc/nginx/sites-enabled/proxmox-gui
– Paste the code below and save the file. Make sure that you change the IP addresses to match your Proxmox nodes' IP addresses:
Edit (11-11-2017)
upstream proxmox {
    ip_hash;    # added ip hash algorithm for session persistency
    server 10.1.1.2:8006;
    server 10.1.1.3:8006;
    server 10.1.1.4:8006;
}
server {
    listen 80 default_server;
    rewrite ^(.*) https://$host$1 permanent;
}
server {
    listen 443;
    server_name _;
    ssl on;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $http_host;
    location / {
        proxy_pass https://proxmox;
    }
}
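If you find that the noVNC console drops after about a minute of inactivity, a common optional tweak (not part of the original config above) is to raise the proxy read timeout inside the location block, for example:
location / {
    proxy_pass https://proxmox;
    proxy_read_timeout 3600s;
}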
– Create a symlink for /etc/nginx/sites-enabled/proxmox-gui in /etc/nginx/sites-available:
ln -s /etc/nginx/sites-enabled/proxmox-gui /etc/nginx/sites-available
– Verify that the symlink has been created and that it works:
ls -ltr /etc/nginx/sites-available && cat /etc/nginx/sites-available/proxmox-gui (you should see the config contents from above)
– That’s it! You can now start Nginx service:
systemctl start nginx.service && systemctl status nginx.service (verify that it is active (running)).
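(Before starting, you can also sanity-check the configuration with nginx -t; it should report that the syntax is ok and the configuration test is successful.)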
HAProxy Setup
– Create a backup copy of the default config file.
cp /etc/haproxy/haproxy.cfg /root
– Create an empty /etc/haproxy/haproxy.cfg file (or remove it’s contents):
vi /etc/haproxy/haproxy.cfg
– Paste the following code and save the file. Again, make sure that you change the IP addresses to match your Proxmox hosts, and note that the server names must also match your PVE hostnames, e.g. pve1, pve2, pve3.
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    user haproxy
    group haproxy
    daemon
    stats socket /var/run/haproxy/haproxy.sock mode 0644 uid 0 gid 107

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    timeout connect 5000
    timeout client 50000
    timeout server 50000

listen proxmox_spice *:3128
    mode tcp
    option tcpka
    balance roundrobin
    server pve1 10.1.1.2:3128 weight 1
    server pve2 10.1.1.3:3128 weight 1
    server pve3 10.1.1.4:3128 weight 1
– Note that the above configuration has been tested on HA-Proxy version 1.5.8.
If the HAProxy service fails to start, troubleshoot by running:
haproxy -c -f /etc/haproxy/haproxy.cfg ...and check for errors.
– Start HAProxy service:
systemctl start haproxy.service && systemctl status haproxy.service (Must show active and running)
Testing our setup…
Open a web browser and enter https://pve. You should be able to access the PVE web GUI. (Remember, in my case I assigned ‘pve’ as the hostname of the Debian VM and created a matching entry on my DNS server; your client machine must be able to resolve that address properly, otherwise the Proxmox web GUI will fail to load.)
You can now also test noVNC console and SPICE. Please note that you may need to refresh noVNC window in order to see the vm screen.
UPDATE: You can seamlessly add SSH to the proxied ports if you wish to SSH into any of the PVE hosts.
Just add the lines below to your /etc/haproxy/haproxy.cfg file. Note that I’m using port 222 instead of 22 in order to avoid a conflict with the Debian VM itself, which already listens on TCP port 22.

listen proxmox_ssh *:222
    mode tcp
    option tcpka
    balance roundrobin
    server pve1 10.1.1.2:22 weight 1
    server pve2 10.1.1.3:22 weight 1
    server pve3 10.1.1.4:22 weight 1
Now if you try to connect from your machine as root@pve on port 222 (ssh root@pve -p 222), the first time you will be asked to save the ECDSA key of the host in your .ssh/known_hosts file, and you will then log in to the first Proxmox node, e.g. pve1.
If you attempt to connect a second time, your request will be rejected, since HAProxy will forward it to the second Proxmox node, e.g. pve2, which has a different host key fingerprint from the first. This is of course good for security, but in this case we need to disable the check for the proxied host, otherwise we will not be able to connect to it.
– On your client machine, modify /etc/ssh/ssh_config file (not sshd_config !).
– Remove the following entry:
Host *
– Add the following at the end of the file:
Host pve
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
ServerAliveInterval 5
This disables the ECDSA host key checks ONLY for the host pve and keeps them enabled for ALL other hostnames, so in short it is still quite a restrictive setting. ServerAliveInterval is used to keep the SSH session alive during periods of inactivity; I’ve noticed that without this parameter the SSH client drops the session quite often.

Using DRBD block level replication on Windows

WDRBD or Windows DRBD

DRBD is a well known distributed replicated storage system for Linux. Recently a company has ported the DRBD kernel driver and userspace utilities to Windows, so it is now possible to set up DRBD resources on a Windows machine. DRBD is a block-level storage replication system (similar to RAID-1) used in highly available storage setups. You can use both Desktop and Server Windows O/S, but it is recommended to use a Server version if this is intended for production use.

What you will need:
– 2 x Windows Server machines (Win2012 R2 in my case)
– DRBD binaries from here
– A dedicated volume (disk) to be replicated by DRBD. You can also use an NTFS volume with existing data; for example, you can use this method to replicate an existing Windows file server to a second Windows server. However, in this case you will need to resize (shrink) the server’s partition in order to create the second, small partition needed for the DRBD meta-data.
– Optionally a dedicated network for DRBD replication.

Configuration:

You must follow these steps on both nodes.

– Setup both Windows machines with static IP addresses. In my case I will use 10.10.10.1 for node1 and 10.10.10.2 for node2. Also provide a meaningful hostname on each server since you will need this during DRBD configuration. In my case node1: wdrbd1 and node2: wdrbd2 .
– Install DRBD binaries by double clicking on setup file and following the wizard. Finally reboot both servers.
– Navigate to the “Program Files\drbd\etc” and “Program Files\drbd\etc\drbd.d” folders and rename (or create a copy of) the following files:

drbd.conf.sample –> drbd.conf
   global_common.conf.sample –> global_common.conf

(Note: for this test we do not need to modify the content of the above files. However, you may need to do so in other scenarios.)

– Create a resource config file in “Program Files\drbd\etc\drbd.d”

r0.res (you can copy the contents from the existing sample config file)

A simple resource config file should look like this:



resource r0 {
    on wdrbd1 {
        device      e minor 2;
        disk        e;
        meta-disk   f;
        address     10.10.10.1:7789;
    }

    on wdrbd2 {
        device      e minor 2;
        disk        e;
        meta-disk   f;
        address     10.10.10.2:7789;
    }
}
“minor 2” means volume index number. (c: volume is minor 0, d: volume is minor 1, and e: is minor 2).

– Partition the hard drive for DRBD use. In my case I have a dedicated 40GB disk to be used for DRBD replication. I will use Disk Management to partition/format the hard drive.
I will need 2 partitions: the 1st partition will be the data partition (device e above) and the 2nd partition will be the meta-data partition (device f above). So let’s create partition 1 and format it as NTFS. The size of this partition (e) in my case will be 39.68GB. The rest of the free space, 200MB, will be dedicated to the meta-data partition (f). To calculate the required meta-data size properly, please use the following link from the Linbit DRBD documentation site (as a rough rule of thumb, DRBD needs about 32 KiB of meta-data per 1 GiB of replicated data, so 200MB is more than enough for a 40GB volume).
The disk layout should now show both partitions. Please note that the data partition (E:) has a filesystem (NTFS), but the meta-data partition (F:) does not; it must remain a RAW partition.

– Once finished with the above on both nodes, open a command prompt (as an Administrator)  and use the following commands to prepare DRBD:

 drbdadm create-md r0    (on each node)
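(Optionally, you can first check that the resource file parses cleanly: on the Linux drbd-utils, drbdadm dump r0 prints the resource definition back without errors, and the Windows port is expected to behave the same.)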
Initial Sync
drbdadm up r0   (on node1)
drbdadm up r0   (on node2)
drbdadm status r0  (on node1)
You should see something like the following:

C:\Users\wdrbd>drbdadm status r0
  r0 role:Secondary
    disk:Inconsistent
    wdrbd2 role:Secondary
        peer-disk:Inconsistent
Initiate a full sync on node1:

drbdadm primary --force r0
After the sync is completed you should get the following:

C:\Users\wdrbd>drbdadm status r0
  r0 role:Primary
    disk:UpToDate
    wdrbd2 role:Secondary
          peer-disk:UpToDate
The disk state on both nodes should now be UpToDate. As you can see, the Primary node in this case is node1. This means that node1 is the only node which can access the E: drive to read/write data. Remember that NTFS is not a clustered file system, meaning it cannot be opened for read/write access on both nodes concurrently. Our DRBD configuration in this scenario prevents dual-Primary mode in order to avoid corruption of the file system.

Switching the roles:

If you want to make node2 the Primary and node1 the Secondary, you can do so as follows (make sure there are no active read/write sessions on node1, since DRBD will have to force-close them):

On node1: drbdadm secondary r0
On node2: drbdadm primary r0
After issuing the above commands, you should be able to access the E: drive on node2 this time. Feel free to experiment and don’t forget to report any bugs to the project’s github web page!