Simplicity yields performance

A follow-up to the previous article, Create LVM thin provisioning with SSD cache. After a while of performance issues I have replaced the mirror of two WD Blacks with a RAID5 of WD Reds.

Every now and then smartmontools told me that a sector on one of the WD Blacks in the RAID1 mirror was bad. The system was often slow while the CPU sat idle. What I have come up with is that the NVMe SSD throttled down when it got hot. Not good when it is used as a cache. A faulty drive is also bound to cause issues.

I have now replaced the two WD Blacks with three WD Reds, bought a new chassis to be able to house them, and put my old 3Ware RAID controller to use.

I first went with LVM thin provisioning, but since the volume has to grow all the time it was also really slow, so I reinstalled with ordinary LVM. The VMs are still on an XFS volume as QCOW2 files, but they are to be migrated. I attached an LVM volume to a VM and tested performance: I get near-gigabit speeds over CIFS from that volume.

The 3Ware has a battery-backed (BBU) cache, so writes are fast too.
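For reference, attaching a plain LVM volume to a libvirt guest can be sketched like this (the volume name, size, and domain name are made up for illustration):

```shell
# Carve out a logical volume for the guest (names are illustrative)
lvcreate -L 100G -n lv_vm1 vg_virt1

# Attach it to the running domain vm1 as a virtio disk; --persistent
# also writes the disk into the domain XML so it survives a restart
virsh attach-disk vm1 /dev/vg_virt1/lv_vm1 vdb --persistent
```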

Group by if variable is true in Ansible

One way of selecting whether to apply a role or not in Ansible is to set variables like “webserver: True” or “mysql: True” on the hosts and then do this in the playbook:

- { role: webserver, when: webserver is defined and webserver }

Another is to have the host member of groups and have plays like:

- hosts: webserver
  roles:
    - webserver

I find the latter more flexible and to my liking. Sometimes you have both and need to bring them together. This can be done with the Ansible action group_by:

- hosts: all
  connection: local
  gather_facts: false

  tasks:
    - name: group_by(webserver)
      group_by:
        key: "{% if webserver is defined and webserver %}webserver{% endif %}"
    - name: group_by(mysql)
      group_by:
        key: "{% if mysql is defined and mysql %}mysql{% endif %}"


Get the latest upstream version of packages on CentOS

Sometimes the version of a package available on CentOS is too old and you really want the latest, but you don’t want to manually update it by compiling from source and tracking upstream yourself. Fortunately there are people who do this for you. Head over to the IUS Community Project and install their RPMs.

Create LVM thin provisioning with SSD cache

Awesome LVM setup for my VMs

I recently built a new home server and I wanted to use LVM thin provisioning backed by an NVMe cache. This is basically just a snippet from my command history.

[Update: I later found out that libvirt doesn’t seem to support LVM thin provisioning. Dang it! Guess I’ll have to do just SSD caching then]

Setup

/dev/sda 2TB WDC Black harddrive
/dev/sdb 2TB WDC Black harddrive
/dev/nvme0n1 256GB Intel 600p SSD

A bunch of RAID1 devices on sda and sdb. Of concern here is:
/dev/md125  1.9TB RAID1 mirror of /dev/sda5 + /dev/sdb5

Machine is called virt1 and runs CentOS 7

Create LVM thin pool

Create Physical Volumes
pvcreate /dev/md125
pvcreate /dev/nvme0n1

Create Volume Group vg_virt1 on big RAID volume
vgcreate vg_virt1 /dev/md125

Create Thin Pool tp_vmpool on all available space in vg_virt1
lvcreate -l100%FREE --thinpool tp_vmpool vg_virt1 --verbose

Volumes can now be created in vg_virt1/tp_vmpool
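Creating a thin volume in the pool can be sketched like this (the volume name and virtual size are made up for illustration):

```shell
# Create a 100G thin volume; -V is the virtual size, and physical
# space in the pool is only allocated as data is actually written
lvcreate -V 100G --thin -n lv_vm1 vg_virt1/tp_vmpool
```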

Connect LVM Cache (dm-cache)

Extend Volume Group with SSD
vgextend vg_virt1 /dev/nvme0n1

Create a cache pool out of the free space
lvcreate --type cache -L 238G -n lv_virt1_cachepool /dev/vg_virt1/tp_vmpool /dev/nvme0n1

(238G was the effective space minus 1% or something like that. It may work with -l100%FREE too.)
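To confirm that the cache got attached, the internal cache volumes can be listed with lvs (a sketch; the exact columns available depend on your LVM version):

```shell
# -a also shows hidden internal volumes such as the cache pool and
# its metadata; pool_lv shows which volume each one is attached to
lvs -a -o name,size,pool_lv,data_percent vg_virt1
```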

Tunnel only specific applications through VPN

A mission of trance

Say you are on a mission. A mission to spread your trance music. So you create software for this trance mission, and for fun you call it transmission. This software runs in the background, which is called daemonizing in Linux, so you obviously call the executable transmission-daemon. And to keep it from taking over your system you run it as a dedicated user called transmission.

Evil corp wants you dead

Not everyone likes your trance music, and some try to shut you down. So naturally you want to hide your transmission-daemon behind a VPN service. But to avoid revealing that it is you who runs the transmission, you want only the transmission traffic to exit via the VPN tunnel while the rest of your traffic exits the normal way.

Our setup

In this example we are using AzireVPN as the VPN provider. We have our transmission-daemon running on Fedora Linux 24, and we will be using OpenVPN for the tunneling. So sign up for a VPN and get the .ovpn file.

Install all the things

# dnf install openvpn transmission-daemon

Yup, that’s it.

Configure OpenVPN

Copy your .ovpn file to /etc/openvpn and name it AzireVPN-SE.conf. It is important that the filename ends with .conf.

I added these lines to the end of that file

auth-user-pass /etc/openvpn/Azire.auth
route-nopull
script-security 2
up /etc/openvpn/up.sh
down /etc/openvpn/down.sh
inactive 300

I created a file with our username on the first line and our password on the second line, named /etc/openvpn/Azire.auth. Remember to chmod it to 0600.
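Creating the credentials file can be sketched like this (the placeholder username and password are obviously to be replaced with your own):

```shell
# Ensure the OpenVPN config directory exists
mkdir -p /etc/openvpn

# Username on line 1, password on line 2 (placeholders shown here)
printf '%s\n' 'your-username' 'your-password' > /etc/openvpn/Azire.auth

# Restrict the file so only root can read the credentials
chmod 600 /etc/openvpn/Azire.auth
```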

I created /etc/openvpn/up.sh with the contents of this snippet.
Also a matching /etc/openvpn/down.sh with the contents of this snippet.
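For a rough idea of what such an up script can look like, here is a minimal sketch of my own, assuming a policy-routing approach with an iptables owner match (the linked snippets are the authoritative versions and may do it differently):

```shell
#!/bin/sh
# Sketch of an OpenVPN up script. OpenVPN sets $dev to the tunnel
# interface (e.g. tun0) before calling this script.

# Mark all packets generated by the transmission user
iptables -t mangle -A OUTPUT -m owner --uid-owner transmission \
    -j MARK --set-mark 0x1

# NAT the marked traffic out through the tunnel interface
iptables -t nat -A POSTROUTING -o "$dev" -j MASQUERADE

# Route marked packets via a dedicated routing table whose only
# default route is the tunnel (everything else keeps the normal route,
# which is why route-nopull is set in the .conf)
ip rule add fwmark 0x1 table 100
ip route add default dev "$dev" table 100

# The tunnel is up: start the daemon
systemctl start transmission-daemon.service
```

The matching down script would remove the iptables rules and the ip rule, and stop transmission-daemon.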

Configure services

We want OpenVPN to start transmission-daemon when the tunnel comes up, so disable the regular system service so it isn’t started when the machine boots.

# systemctl disable transmission-daemon.service

Create an AzireVPN service by creating a special symlink of the OpenVPN unit file, reloading systemd, and enabling the service:

# cd /etc/systemd/system
# ln -s '/lib/systemd/system/openvpn@.service' \
  'openvpn@AzireVPN-SE.service'
# systemctl daemon-reload
# systemctl enable openvpn@AzireVPN-SE.service

You should now be able to start and stop your tunnel (and thus also transmission-daemon) with

systemctl start openvpn@AzireVPN-SE.service
systemctl stop openvpn@AzireVPN-SE.service

Keep the motor running

Sometimes the tunnel goes down, and transmission-daemon goes with it. You can have systemd restart it when that happens by editing /lib/systemd/system/openvpn@.service and adding the following line to the [Service] section:

Restart=on-failure

This should probably be done with a drop-in override instead of editing the packaged unit file.
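A drop-in override keeps the change outside the packaged unit file, so it survives package upgrades:

```ini
# "systemctl edit openvpn@.service" opens an editor and saves this as
# /etc/systemd/system/openvpn@.service.d/override.conf
[Service]
Restart=on-failure
```

Run systemctl daemon-reload afterwards if you create the file by hand instead of via systemctl edit.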


Happy trancing.