This was originally published as a Gist to help some peers, but since Gists are hard to discover, I’m reposting it here.
The goal is to install Red Hat Virtualization onto a single host, for example a home lab. I’m doing this with Red Hat Enterprise Linux (7.7) and RHV, but it will work just as well with CentOS and oVirt. If you’re using RHV, note that this configuration is not supported in any way!
The Server
My home lab consists of a single server with adequate CPU and RAM to host a few things. I’m using an old, small HDD for the OS, and a larger NVMe drive will be used for VMs. My host has only a single 1GbE link, which isn’t an issue since I’m not expecting to use remote storage, and it’s ok if it loses network access due to a failure…it’s a lab!
You can use pretty much whatever hardware with whatever config you like for hosting your virtual machines. The only real requirement is about 6GB of RAM for the OS + hosted engine, but you’ll want more than that to host other VMs.
Install and configure
Before beginning, make sure you have forward and reverse DNS working for the hostnames and IPs you’re using for both the hypervisor host and RHV Manager. For example:
10.0.101.20 = rhvm.lab.com
10.0.101.21 = rhv01.lab.com
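A quick way to verify both directions resolve correctly (substituting your own names and addresses, and assuming `dig` from the bind-utils package is available):

```
# forward lookups should return the IPs
dig +short rhvm.lab.com     # expect 10.0.101.20
dig +short rhv01.lab.com    # expect 10.0.101.21

# reverse lookups should return the hostnames
dig +short -x 10.0.101.20   # expect rhvm.lab.com.
dig +short -x 10.0.101.21   # expect rhv01.lab.com.
```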
With that out of the way, let’s deploy and configure RHV!
- Install RHEL hypervisor OS
I’m using RHEL, not RHV-H, because it’s easier to manage and add packages (such as an NFS server). I’m going to assume that you’ve verified the CPU virtualization extensions, etc. have been enabled via BIOS/UEFI. If you’re feeling particularly paranoid, use the `virt-host-validate qemu` command to check that the host is configured for virtualization.
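For an even quicker sanity check, the relevant CPU flags are visible in /proc/cpuinfo; a count of zero below means the extensions are missing or disabled in firmware:

```
# vmx = Intel VT-x, svm = AMD-V; prints the number of logical CPUs
# exposing hardware virtualization extensions
grep -c -E 'vmx|svm' /proc/cpuinfo
```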
If you are using a single, shared drive instead of separate drives for the OS and VMs, I highly recommend allocating about 40GiB for the RHEL OS and reserving the remainder for VM storage domains. If you’re using some kind of hardware or software RAID for the OS drive, configure it however you like.
After installing RHEL 7.7, register and attach it to the appropriate pool.
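Something like the following works; the pool ID is a placeholder you’ll need to replace with one from your own subscriptions:

```
# register the system with Red Hat
subscription-manager register

# list available subscriptions and note the pool ID
subscription-manager list --available

# attach using the pool ID from the previous command
subscription-manager attach --pool=<pool_id>
```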
- Add the needed repos, update, and install Cockpit
Following the docs here, enable the repos and update the host.
```
# enable the needed repos
subscription-manager repos \
    --disable='*' \
    --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-rhv-4-mgmt-agent-rpms \
    --enable=rhel-7-server-ansible-2-rpms

# update everything
yum -y update

# install cockpit with the various add-ons
yum -y install cockpit-ovirt-dashboard

# enable cockpit and open the firewall
systemctl enable cockpit.socket
firewall-cmd --permanent --add-service=cockpit

# since a kernel update probably got installed, reboot the host. If not,
# start cockpit, reload the firewall, and skip this reboot.
reboot
```
- Configure host storage and management network
If you have a fancy storage setup for VM storage (RAID 0/1/5/6/10, ZFS, whatever), now is the time to configure it. The same goes for any network config (bonds, etc.) needed for management (VM networks come later) that wasn’t done pre-install.
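As an example, if management were to use an active-backup bond, a minimal sketch with nmcli might look like this; the interface names (`eno1`, `eno2`) are assumptions:

```
# create the bond interface
nmcli con add type bond con-name bond0 ifname bond0 mode active-backup

# enslave the physical NICs (assumed names) to the bond
nmcli con add type bond-slave con-name bond0-port1 ifname eno1 master bond0
nmcli con add type bond-slave con-name bond0-port2 ifname eno2 master bond0

# activate the ports, then the bond itself
nmcli con up bond0-port1
nmcli con up bond0-port2
nmcli con up bond0
```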
My host, with a separate NVMe drive for VMs, was configured using Cockpit. The drive was formatted for LVM, and in the VolGroup, I created a thin pool, which then has two (thin) volumes:
- `rhv_she` – 100GiB
- `rhv_data` – remaining capacity
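The same layout can be created from the command line if you prefer; a sketch, where the device name, volume group name, and the `rhv_data` size are assumptions to adjust for your hardware:

```
# initialize the NVMe drive (assumed to be /dev/nvme0n1) for LVM
pvcreate /dev/nvme0n1
vgcreate vg_rhv /dev/nvme0n1

# create a thin pool from (nearly) all of the volume group
lvcreate --type thin-pool -l 95%FREE -n thinpool vg_rhv

# create the two thin volumes; rhv_data's virtual size is a placeholder
lvcreate -V 100G --thin -n rhv_she vg_rhv/thinpool
lvcreate -V 800G --thin -n rhv_data vg_rhv/thinpool

# format with XFS and mount (add /etc/fstab entries to make this persistent)
mkfs.xfs /dev/vg_rhv/rhv_she
mkfs.xfs /dev/vg_rhv/rhv_data
mkdir -p /mnt/rhv_she /mnt/rhv_data
mount /dev/vg_rhv/rhv_she /mnt/rhv_she
mount /dev/vg_rhv/rhv_data /mnt/rhv_data
```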
These volumes are formatted using XFS and mounted to `/mnt/rhv_she` and `/mnt/rhv_data` respectively. Last, but not least, set permissions:

```
# set permissions for the vdsm user and kvm group
chown 36:36 /mnt/*
```

I’m using NFS to create the illusion of shared storage, just in case I add a second host later.
```
# create the exports file, substitute your subnet below
cat << EOF > /etc/exports
/mnt/rhv_she 10.0.101.0/24(rw,async,no_root_squash)
/mnt/rhv_data 10.0.101.0/24(rw,async,no_root_squash)
EOF

# enable the server
systemctl enable --now nfs-server

# allow access, then reload so the permanent rule takes effect immediately
firewall-cmd --permanent --add-service=nfs
firewall-cmd --reload
```
Test NFS:
```
mkdir /mnt/test && mount hostname_or_ip:/mnt/rhv_she /mnt/test
date > /mnt/test/can_touch_this
rm /mnt/test/*
umount /mnt/test
rmdir /mnt/test
```
- Using Cockpit, deploy RHV Manager
Follow the docs here.
I assign 2 vCPUs and 4GiB RAM to the VM. It may complain. It’ll be fine.
Once ready, click the Next button; it’ll prepare and stage some things, including downloading the Self-Hosted Engine (SHE) VM template. Note that this is a few GiB in size, so it may take a while if your internet is slow.
At some point, it will ask for the storage you want to use for SHE. Point it to the NFS export for `rhv_she`, e.g. `10.0.101.21:/mnt/rhv_she`. The disk size should be pre-populated at around 80GiB; I leave it at the default value since the underlying LVM volume is thin provisioned anyway.
- Configure and update RHV Manager
Start by putting the HA cluster for SHE into maintenance mode. From the hypervisor node…
```
# From the hypervisor node, set maintenance mode
hosted-engine --set-maintenance --mode=global
```
SSH to the RHV-M virtual machine and follow the docs.
```
# ssh to the RHV-M / SHE virtual machine
ssh hostname_or_ip_of_hosted_engine

# register and attach
subscription-manager register
subscription-manager attach --pool=blahblahblah

# add the repos
subscription-manager repos \
    --disable='*' \
    --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-supplementary-rpms \
    --enable=rhel-7-server-rhv-4.3-manager-rpms \
    --enable=rhel-7-server-rhv-4-manager-tools-rpms \
    --enable=rhel-7-server-ansible-2-rpms \
    --enable=jb-eap-7.2-for-rhel-7-server-rpms

# check for updates
engine-upgrade-check

# assuming it returns positive (otherwise, stop here)
yum -y update ovirt\*setup\* rh\*vm-setup-plugins

# run engine-setup to update the system; more or less, accept the defaults (no
# need to do backups of the databases) and let it do its thing
engine-setup

# once done, update the remaining OS packages
yum -y update

# if you're planning on updating the hypervisor, shut down RHV-M
shutdown -h now

# if you're not updating the hypervisor, reboot if a kernel update was applied
#reboot
```
And, finally, update the hypervisor.
```
# make sure the RHV-M VM is down
hosted-engine --vm-status

# update packages in the normal way
yum -y update

# reboot
reboot

# when the host comes back up, reconnect via ssh or console.
# the below command will take a few minutes to actually work. At first it will
# spit out errors about how it can't connect to storage and to check a few
# services. You can view the logs for them, etc., but...for me...it usually
# takes about 5 minutes before it responds correctly (with a VM down message)
hosted-engine --vm-status

# once it's responding, restart RHV-M
hosted-engine --vm-start
```
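If you’d rather not re-run the status command by hand, you can poll it; a small sketch, assuming `hosted-engine --vm-status` exits non-zero while the HA agent and broker are still unreachable:

```
# poll every 30 seconds until the HA services respond
until hosted-engine --vm-status > /dev/null 2>&1; do
    echo "waiting for the HA agent/broker to respond..."
    sleep 30
done

# once they're responding, restart RHV-M
hosted-engine --vm-start
```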
Give the RHV-M VM a minute or two to start up, then browse to the admin portal: `https://hostname.domainname/ovirt-engine/webadmin/`.

Since there is only one node in the cluster and no chance for RHV-M HA, there’s no harm in leaving the SHE cluster perpetually in maintenance mode. If you feel the need, remove it from maintenance mode by running `hosted-engine --set-maintenance --mode=none` from the hypervisor host.
- Configure the RHV environment
At this point you should be logged into the RHV-M admin GUI and be greeted by the (mostly empty) dashboard. Your one host should be added to the default datacenter, and you should have a storage domain (named whatever you specified during the install, `hosted_storage` by default).

Let’s finish configuring the RHV deployment. At a minimum, this will mean…
- If needed, configure additional physical networks.
If you need to configure additional physical adapters (standalone or bonds) for VM, storage, live migration, etc., now is the time to do so. Browse to Compute -> Hosts and click on the name of the host, then select the “Network Interfaces” tab and, finally, the “Setup Host Networks” button in the upper right.
- If needed, configure additional logical networks.
A default `ovirtmgmt` network will have been created that is capable of placing VMs onto the same network as the management interface. If you need additional configuration (e.g. VLANs), browse to Network -> Networks and add them. Once the network(s) have been defined, browse to Compute -> Hosts, select the host (click the name to view details), and browse to the “Network Interfaces” tab. Click the “Setup Host Networks” button in the upper right, then adjust the network config by dragging and dropping the logical network onto the physical configuration. Once done, click OK to apply.

Note that if you adjust the `ovirtmgmt` network, there may be some flakiness when applying multiple changes in one commit. Simply avoid adjusting it in conjunction with other changes.
- Add the second storage domain.
Browse to Storage -> Domains and click the “New Domain” button in the upper right. Fill in the details for an NFS domain (assuming you followed the instructions above) at `/mnt/rhv_data`. Give it a creative and descriptive name like “rhv_data” so you know its function!
- Enable overcommit.
By default, RHV won’t overcommit memory. To change this, browse to Compute -> Cluster, highlight the cluster (`Default`, by default), and click the “Edit” button. Browse to the “Optimization” tab, then set “Memory Optimization” to your desired value. I also recommend enabling “Count threads as cores” and both “Enable memory balloon optimization” and “Enable KSM” (configured for “best KSM effectiveness”) on this same tab.
- Optionally, remove Spectre/Meltdown protection.
You may want to remove the IBRS Spectre/Meltdown mitigations if you are willing to trade less security for more CPU performance. If so, browse to Compute -> Cluster, highlight the cluster (by default, `Default`), and click the “Edit” button in the upper right. On the General tab, for CPU Type, choose the latest generation supported by your CPU which doesn’t have `IBRS SSBD` (for Intel) or `IBPB SSBD` (for AMD) in its name.
- Verify there are no conflicts with MAC address ranges.
If there is more than one RHV deployment on your network, verify that they aren’t using the same MAC address ranges for virtual machines. Browse to Administration -> Configure, then choose the “MAC Address Pools” tab. Click on the default pool and press the “Edit” button at the top of the modal. Check the range against any other instances and adjust if needed.
Some closing thoughts
- Uploading ISOs / templates can be done via the GUI, but you’ll need to download the CA and trust it before it’ll succeed. To download the CA bundle, browse to `https://hostname.domainname/ovirt-engine/` and select “CA Certificate”, on the left side under “Downloads” (or grab it from the command line; see the sketch after this list). Once downloaded, add it to your keychain and trust it as needed. To upload an ISO, browse to Storage -> Disks, then choose Upload -> Start in the upper right corner. Click “Test Connection” in the lower part of the ensuing modal to verify that it will work. Assuming the test passes, choose the ISO and the storage domain you want it to land in, then click OK.
- Console access is, arguably, easier using noVNC vs SPICE with VirtViewer…and is definitely easier if the host is not directly accessible by the client. For each VM, after it’s powered on, highlight the VM in the Compute -> Virtual Machines view, then select the dropdown for “Console” in the upper right and choose “Console Options”. Select the radio button for “VNC” at the top, then “noVNC” below it. Click OK. When opening the console, it will now open in a new window/tab using the HTML5 noVNC client.
- Applying updates will require all of the VMs to be shut down. It’s not required, but I find that putting the storage domains into maintenance mode first eliminates some issues with waiting for the storage domain during reboot. To do so, browse to the datacenter (Compute -> Datacenters), click the datacenter name, then, in the Storage tab, highlight the storage domain and click the “Maintenance” button in the upper right.
After that, update RHV-M and the hypervisor OS just as described in the steps above.
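As an alternative to downloading the CA through the portal (mentioned in the first item above), the engine serves the certificate from its PKI resource endpoint; a sketch, substituting your RHV-M hostname, with the trust-store steps assuming a RHEL/Fedora client:

```
# download the engine CA certificate in PEM format (-k because the CA
# isn't trusted yet)
curl -k -o rhvm-ca.pem \
  'https://hostname.domainname/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'

# add it to the system trust store on the client machine
cp rhvm-ca.pem /etc/pki/ca-trust/source/anchors/
update-ca-trust
```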