An operational adventure...


In my lovely adventure to build a hosting service with a buddy, I’ve been researching various operating systems for a good virtualization platform. Of course, I’ve run across ESXi and XenServer (which are both awesome, mind you), but I’ve decided I want to go a different way with this – I want to use KVM. There are some stellar benchmarks out there showing that KVM outperforms XenServer and sits extremely close to bare-metal speeds.

Of course, choosing KVM means there’s a huge range of choices for the OS to run as the virt host. My initial choice was Debian, but considering the package index for the stable release doesn’t get updated frequently, I had to scratch that off the list.

Next up was Ubuntu. Logical next step, right? It’s a Debian-derivative with a thriving ecosystem full of virtualization, containerization, and updated packages! Woo-hoo! Ubuntu has a pretty kick-ass release cycle, considering LTS builds are supported for five years. But, I tend to shy away from Ubuntu just because even the most minimal server install feels pretty heavy.

So, I decided to expand my horizons. I looked into RHEL, Fedora, CentOS, FreeBSD, and others. FreeBSD was ruled out, since the kqemu-kmod package is broken in 10.0+ and most virt-related development there is going toward bhyve. RHEL is absolutely out because of per-socket pricing ($). That leaves me with CentOS.

To be fair, I hadn’t used CentOS in a very long time. I’ve almost always stuck to Debian and its derivatives. But, in this case, I’m glad I switched over to the dark side…

CentOS is awesome so far. Considering Red Hat now drives KVM development (they acquired Qumranet, the company behind it), CentOS is probably the perfect target for running a KVM virt stack. The installer is gorgeous, the system seems fairly minimal, and it ships with SELinux enabled out of the box.

This brings us to the main point of this post: SELinux.

Running KVM in an SELinux-confined environment is fun. You end up playing with a lot of semanage fcontext -a -t virt_image_t "/path/to/images(/.*)?" and restorecon -R /path/to/images. After some tinkering, I noticed that launching a VM with virt-install would almost work, and then we’d end up with this:

Allocating 'debian-testing-.qcow2'                                       | 8.0 GB  00:00:01
ERROR    internal error: process exited while connecting to monitor: 2015-08-24T02:19:22.079170Z qemu-kvm: -drive file=/var/vmstorage/debian-testing-1.qcow2,if=none,id=drive-virtio-disk0,format=qcow2: could not open disk image /var/vmstorage/debian-testing-1.qcow2: Could not open file: Permission denied


…and a whole lot of no idea why.
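
For reference, the labeling dance mentioned above looks roughly like this on a local filesystem (the path here is just an example; note that semanage fcontext rules only apply to filesystems that store SELinux labels, so they won’t help on an NFS mount):

```shell
# Example only: label a local image directory for use by sVirt-confined guests.
# Add a persistent file-context rule for the whole tree...
semanage fcontext -a -t virt_image_t "/var/vmstorage(/.*)?"
# ...and relabel whatever already lives there to match the rule.
restorecon -R /var/vmstorage

# Check the result; files should now show virt_image_t.
ls -Z /var/vmstorage
```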

Now, my goal is to run this VM on a nice little laptop I’ve got set up for testing, but store the VM image on my NAS, so naturally, /var/vmstorage is an NFS mount. After some digging, I noticed this in the SELinux audit log:

type=AVC msg=audit(1440382762.074:17813): avc:  denied  { open } for  pid=26650 comm="qemu-kvm" path="/var/vmstorage/debian-testing-1.qcow2" dev="0:36" ino=51380227 scontext=system_u:system_r:svirt_t:s0:c441,c1018 tcontext=system_u:object_r:nfs_t:s0
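
The interesting bits of a denial like that are the two contexts: scontext is who was acting (the sVirt-confined qemu-kvm process) and tcontext is what it touched (the image file, labeled nfs_t). As a rough sketch, with the denial pasted into a variable purely for illustration, you can pull them out with grep:

```shell
# The AVC line from the audit log, stored in a variable for illustration only.
avc='avc: denied { open } for pid=26650 comm="qemu-kvm" scontext=system_u:system_r:svirt_t:s0:c441,c1018 tcontext=system_u:object_r:nfs_t:s0'

# scontext = the acting process; tcontext = the object it was denied access to.
echo "$avc" | grep -o 'scontext=[^ ]*'
# scontext=system_u:system_r:svirt_t:s0:c441,c1018
echo "$avc" | grep -o 'tcontext=[^ ]*'
# tcontext=system_u:object_r:nfs_t:s0
```

On a live box you’d normally query the log with ausearch -m avc -ts recent and feed the results through audit2why rather than grepping by hand.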


To test, I did setenforce 0 and tried virt-install again, and it worked! Googling the error message (even with the necessary Google-Fu) didn’t yield useful results. After finding this article and running a quick getsebool -a, guess what’s right at the bottom?

virt_use_nfs --> off


sigh

After a quick setsebool virt_use_nfs on and setenforce 1, I was back in business with SELinux enforcing and virt-install working.
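
One caveat worth flagging: a plain setsebool only flips the boolean until the next reboot. A sketch of the persistent version:

```shell
# -P writes the boolean into the policy store, so it survives a reboot.
setsebool -P virt_use_nfs on

# Double-check it took:
getsebool virt_use_nfs
# virt_use_nfs --> on
```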