Backups using Borg

I’m not going to lecture anyone about backups. I’m going to trust you already know you need backups and that you’re curious about how to set them up or you want to compare your setup to mine. Either way, you’re here now.

My setup is pretty straightforward. I repurposed an old NUC with an external SATA enclosure to act as the borg server. The external enclosure has four drives in a RAID-5, providing enough capacity for me to back up for quite a while. In addition, I have the most important data replicated to object storage using rclone. This gives me multiple copies of the data with multiple layers of protection, covering the 3-2-1 strategy for my most important data; for the rest I am ok with the risk of loss.
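
The object-storage copy is just a periodic rclone sync of the most important directories. A minimal sketch, assuming an rclone remote named “b2” and a bucket called “important-backups” (both are placeholders, not my actual names):

    # one-way copy of the critical data to object storage
    rclone sync /srv/critical-data b2:important-backups/critical-data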

Borg works on a client-server principle, so the clients back up data to the “remote” borg server (it’s not actually remote, just at the other end of the house). If you aren’t familiar with borg, I highly encourage you to read the docs and have at least a basic familiarity with its principles of operation.
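
To give a flavor of it, a client interaction looks roughly like this (the hostname, user, and repo path are placeholders for my actual setup):

    # one-time setup: create an encrypted repo on the borg server
    borg init --encryption=repokey ssh://backup@borg-server.lan/srv/borg/myhost

    # nightly run: archive /etc and /home, named after the host and timestamp
    borg create --stats ssh://backup@borg-server.lan/srv/borg/myhost::{hostname}-{now} /etc /home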

My setup is loosely based on the configuration described here, so you may want to give that a read too if you like.

What and when to back up

I have a lot of things self-hosted in my home and on virtual machines in a couple of different services. I’m simply going to list a few of the things I back up, and leave you to evaluate what data you want backed up and where it lives.


Private chat using Prosody

There are a seemingly infinite number of chat protocols, services, and various other ways to interact with other people. Wanting to try something new and experiment a bit, I decided to deploy a Prosody server to a DigitalOcean virtual machine. This turned out to be more time-consuming than I expected, despite everyone extolling how “quick, simple, and easy” it is, largely because the docs are lackluster and examples are hard to come by.
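
Once the server itself is installed, the account plumbing at least is only a couple of prosodyctl commands; a quick sketch, with the domain being a placeholder for your own:

    # sanity-check the configuration, then create an account
    sudo prosodyctl check config
    sudo prosodyctl adduser alerts@chat.example.com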

I currently use this Prosody instance to receive alerts and messages from the various things running in my lab (e.g. alerts about power outages sent using ntfy), as well as chatting with my wife. Yes, I could use WhatsApp, Telegram, Slack, IRC, Twitter, Signal, SMS, Rocket.Chat, Matrix, RCS, or about a thousand other things, but what fun would that be?


WireGuard VPN

I travel somewhat frequently for work and end up using untrusted WiFi as a result. Being privacy conscious, and the primary IT helpdesk for my family, means that I want/need secure access to my resources. The homelab is super useful for demonstrating functionality and concepts, so I find that I use it fairly often when doing presentations. All of this is leading up to a theoretically simple solution: a VPN.

For years I used OpenVPN, which is generally considered a very good solution for most situations, even being used by some enterprises. However, over time I found the performance to be lackluster at best and the amount of time to establish connections was irritating.

A few months ago I had some spare time while on a work trip, so I made the switch to WireGuard. Yes, there’s lots of paranoia about it being validated, etc., etc. but it’s good enough for me. The switch has been surprisingly easy, even allowing me to use my Pi-Hole VM as both the DNS/DHCP and VPN host while providing excellent performance with fewer resources. There are countless helper scripts and other self-hosted GUIs for WireGuard, but honestly with only a few clients I haven’t found the need to use one…adding a client takes about 60 seconds manually.
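
For reference, “manually” amounts to roughly the following, with the interface name and addresses being placeholders for my setup:

    # generate a keypair for the new client
    wg genkey | tee client.key | wg pubkey > client.pub

    # on the server, add the new peer to the running interface
    sudo wg set wg0 peer "$(cat client.pub)" allowed-ips 10.200.0.5/32

    # then drop the matching [Interface]/[Peer] sections into the client's
    # wg0.conf and bring it up with wg-quick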


My Pi-Hole configuration

Pi-Hole has been a staple of my homelab for several years now. Even if you don’t want to use the ad-blocking feature, I find the reporting and logging very helpful. Anyway, it’s one of my favorite projects and I highly encourage anyone and everyone to check it out!

The default configuration is very good, particularly if you want to simply block the majority of ads. I’m a bit overzealous, so I like to block ads, trackers, malware, and many other things. Additionally, I use Pi-Hole for DHCP on my network, having made the change when I moved from a pfSense router to a USG.
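
Adding extra blocklists is just a matter of registering the list URLs and rebuilding gravity; a rough sketch (the URL is a placeholder, and note that older releases read /etc/pihole/adlists.list while newer ones manage adlists through the web UI and gravity database):

    # register an additional blocklist, then rebuild gravity
    echo 'https://example.com/extra-blocklist.txt' | sudo tee -a /etc/pihole/adlists.list
    pihole -g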

I’m going to assume you are able to follow the Pi-Hole deployment instructions and get everything up and running in the default configuration. I use a virtual machine (running CentOS 7.7) to host my instance, but you can use a Raspberry Pi, almost any old hardware, or even a container on an existing Linux machine if you’d like. I’ll also assume that you’re capable of updating your network and/or clients to use the Pi-Hole.


Red Hat Virtualization All-in-One

This was originally published as a Gist to help some peers, however since those are hard to discover I am reposting here.

The goal is to install Red Hat Virtualization onto a single host, for example a home lab. I’m doing this with Red Hat Enterprise Linux (7.7) and RHV, but it will work just as well with CentOS and oVirt. If you’re using RHV, note that this is not supported in any way!

The Server

My home lab consists of a single server with adequate CPU and RAM to host a few things. I’m using an old, small HDD for the OS and a larger NVMe drive for the VMs. My host has only a single 1GbE link, which isn’t an issue since I’m not expecting to use remote storage and, being a lab, it’s ok if it loses network access due to a failure…it’s a lab!

You can use pretty much whatever hardware with whatever config you like for hosting your virtual machines. The only real requirement is about 6GB of RAM for the OS + hosted engine, but you’ll want more than that to host other VMs.

Install and configure

Before beginning, make sure you have forward and reverse DNS working for the hostnames and IPs you’re using for both the hypervisor host and RHV Manager. For example:

  • 10.0.101.20 = rhvm.lab.com
  • 10.0.101.21 = rhv01.lab.com
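
A quick way to confirm both directions resolve before starting the deploy:

    dig +short rhvm.lab.com      # expect 10.0.101.20
    dig +short -x 10.0.101.20    # expect rhvm.lab.com.
    dig +short rhv01.lab.com     # expect 10.0.101.21
    dig +short -x 10.0.101.21    # expect rhv01.lab.com.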

With that out of the way, let’s deploy and configure RHV!
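
The broad strokes, run on the hypervisor host, look something like this (the package names are assumptions based on the RHV 4.x / oVirt packaging; adjust for your repos):

    # install the self-hosted engine setup tooling and the RHV-M appliance
    yum install -y ovirt-hosted-engine-setup rhvm-appliance

    # interactive deployment: answer the prompts for storage, network,
    # and the rhvm.lab.com FQDN from the DNS entries above
    hosted-engine --deploy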


NetApp NFS Mount Access Denied By Server


Just a quick tip today. While setting up a lab I had the need to mount a cDOT (8.3.0) export from behind a NAT gateway. When attempting the mount operation I got a relatively unhelpful error:

mount.nfs: access denied by server while mounting nfs.server.name:/mount/path

After some digging, I found that the cause of this is a setting on the storage virtual machine (a.k.a. SVM, formerly vserver). The problem is that by default cDOT expects a privileged source port (below 1024) to be used for the mount operation. When NAT happens between you and the export, you are at the mercy of the gateway device for the port that gets used. By setting the SVM NFS option mount-rootonly to disabled, this requirement is lifted.

To fix the problem from the cluster shell:
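
Something along these lines, with svm1 standing in for your SVM name:

    vserver nfs modify -vserver svm1 -mount-rootonly disabled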

To fix the problem using the NetApp PowerShell toolkit:
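
Roughly the following should do it once connected with the Data ONTAP PowerShell toolkit (the exact parameter name here is an assumption; verify it with Get-Help Set-NcNfsService):

    # connect to the cluster management LIF first
    Connect-NcController cluster.mgmt.lan

    # parameter name assumed; check Get-Help Set-NcNfsService -Detailed
    Set-NcNfsService -VserverContext svm1 -MountRootOnly $false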

Linux Console Scrolling

A simple, but extremely useful, tip…I didn’t know about it for a long time, but now that I do it’s quite helpful.

In most Linux consoles, including RHEL and its derivatives, SUSE, and Ubuntu (these are the ones I’ve tried), you can scroll up and down through the console history by holding the Shift key and using Page Up/Page Down.

Unfortunately, it does not work with the cDOT console.

Stupid Bash Tricks for SSH

My last post explained how to set up SSH key-based authentication for connecting to a NetApp. If you have multiple/many systems to administer, this makes it easy to quickly connect to and execute commands against them.

However, I’m lazy. I don’t want to type ssh some_system_name or ssh some.ip.add.ress for every system. Also, on some of my systems I have to specify the private key and username to use for connecting, which further lengthens the amount of typing I have to do: ssh -i ~/.ssh/some_special_id my_account@some.netapp.lan.

I have found it to be convenient and easy to create bash aliases for these systems. It’s simple to do:
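
    # alias the short name to the full ssh command from above
    alias na01='ssh -i ~/.ssh/some_special_id my_account@some.netapp.lan'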

Now, whenever I type na01 version it will automatically expand the “na01” to be the full command.

To make the alias permanent, add it to the .bashrc file in your home directory…

If you are feeling particularly fancy, you can configure SSH for autocomplete of the hostnames also.
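
A rough sketch of what that looks like: put a Host entry in ~/.ssh/config, and the bash-completion package will tab-complete the name for you (the names and paths below match the example above):

    # ~/.ssh/config
    Host na01
        HostName some.netapp.lan
        User my_account
        IdentityFile ~/.ssh/some_special_id

With that in place, ssh na01 works on its own and “ssh na<Tab>” completes, no bash alias required.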

NetApp: Quick and dirty way to start the simulator at system startup

Being a primarily NetApp shop, I do a fair amount of testing against their simulator before using any of the Perl (and, slowly, PoSH) scripts against production systems. One of the things I did a while ago was create a simple way of having the simulator(s) start when my virtual machine starts, so that I don’t have to worry about logging in to start it.

NetApp’s documentation for the simulator states two ways of having it start when the server does: using screen to start it in the background, and the more “brute force” method of simply backgrounding the process when it’s started (by appending an ampersand to the end of the command). While both of these methods work, I wanted a way that didn’t require me to log in to the system first in order to access the console of the simulator.
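
For reference, the screen variant from the docs looks roughly like this, assuming the simulator is installed under /sim and started with its runsim.sh script (both paths are assumptions about your install):

    # e.g. from /etc/rc.local: start the simulator in a detached screen
    # session; reattach to its console later with `screen -r simulator`
    /usr/bin/screen -dmS simulator /sim/runsim.sh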


PXE Server Configuration Tutorial

Configuring a PXE server to present the files and information needed for kickstarting your ESX hosts isn’t too difficult a task. It does require some basic Unix/Linux knowledge, but aside from that, it’s not too bad. I use a CentOS virtual machine with just 256 MB of RAM (you’ll need at least 512 for a GUI, but one isn’t necessary) to act as the PXE server for my ESX hosts. This same virtual machine also serves as a management point, as it has access to the management LAN, and with the Perl toolkit and RCLI installed I can automate much of the work I need to accomplish with the hosts.

I happen to segregate the different types of traffic on the ESX hosts onto different VLANs. This means management (COS/PXE), VMotion, IP Storage, and virtual machine traffic (usually several VLANs by itself) are all separate. It is important that the server (or virtual machine) that you are using is configured with at least one interface on the same VLAN/network that the ESX management network is on. That interface will also need to have a static IP address.

It is also important that DHCP is able to function on this network when the host is in a totally unconfigured state. This means if you are trunking to your ESX hosts you must have the native VLAN set to the same as your management VLAN, and port channeling (802.3ad / LACP) cannot be turned on during the PXE process.
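
As a sketch of the DHCP side, a minimal dhcpd.conf fragment for the management subnet might look like this (the addresses are placeholders for your environment):

    subnet 10.0.100.0 netmask 255.255.255.0 {
        range 10.0.100.200 10.0.100.220;
        option routers 10.0.100.1;
        next-server 10.0.100.5;      # the PXE/TFTP server's IP
        filename "pxelinux.0";       # bootloader served from the TFTP root
    }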
