Installing VMware vRO on a Scale Computing Cluster

Hopefully you’ve had time to read my blog post on getting VMware vRealize Operations Manager (vRO) running on the Ravello Systems cloud platform.

Since I recently took delivery of a Scale Computing cluster (kindly gifted to me by the lovely people at Scale, thanks @bocanuts) and I’ve been playing with image installs and migrations, I thought I’d try running vRO on Scale.

This was not as simple as on Ravello, but it really wasn’t that hard, either.

Converting the Image

Ravello can import a bunch of VMDK files and create a new VM image from them, but unfortunately Scale Computing only supports importing its own VM archive format (for now, anyway). However, this format isn’t that complex, and using some freely available tools, I was able to convert the VMware OVA image for vRO into something the Scale cluster will accept.

To use image import/export with Scale, the cluster needs access to some sort of SMB fileshare. I have Samba running on my desktop Ubuntu system, so that’s what I used.
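For reference, the share definition in my smb.conf looks roughly like the following. The share name, path, and user here are just examples from my setup, so substitute whatever fits yours, and restart Samba (e.g. sudo service smbd restart) after adding it.

[vmstore]
    comment = VM import/export share for the Scale cluster
    path = /srv/vmstore
    read only = no
    browseable = yes
    valid users = scaleimport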

[Screenshot from 2015-07-04 13:12:05]

I created a VM with three 1GB disks to serve as a template image when exported. Make sure you use the IDE disk type and not VIRTIO for the disks, because the vRO OVA image expects the drives to appear as /dev/sdx and doesn’t have virtio drivers loaded into the initrd boot image. I spent a bunch of time down a virtio rabbit hole, and while you could probably get it to work, you have to muck about with a lot of configuration files and setup scripts. It’s just not worth it.

I used the ‘export VM’ function on the Scale interface to export the VM to my fileshare as ‘Empty01’, and then I had a look at what was exported and did a little light reverse engineering.

[Screenshot from 2015-07-04 13:13:14]

The Scale export format is a directory on the fileshare named after the VM. It contains an XML file called <vmname>.xml, which holds the VM definition in some sort of proprietary XML schema. It’s easy enough to read, and the important parts for our purposes are the <disk/> sections. They contain a line like this:

<source protocol="scribe" name="scribe/f05f3e37-e41a-4ad3-8f26-ad43345d1213"/>

which tells us that SCRIBE (the Scale Computing Reliable Intelligent Block Engine) has an object with this UUID. That’s our virtual disk.
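If you want to list all of the disk UUIDs from the definition in one go, a quick grep against the exported XML does it (this assumes the export directory appears at /srv/vmstore/Empty01 from the desktop’s point of view; adjust the path to wherever your share lives):

grep -o 'scribe/[0-9a-f-]*' /srv/vmstore/Empty01/Empty01.xml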

Also in the directory, and corresponding to the UUIDs from the <disk/> entries in the XML file, are three QCOW2 format disk image files. To get our vRO VMDKs to import into the Scale cluster, we should just be able to replace these disk image files with QCOW2 versions of the vRO VMDKs. Ah, but how do we get those?

QEMU to the rescue!

QEMU is an open source machine emulator and virtualiser. Because I run Ubuntu Linux as my desktop, getting the tools I needed was as simple as:

sudo apt-get install qemu-utils

and then I could use the qemu-img tool to do what we need. Here’s how to convert a file from VMDK format into a QCOW2 format usable by the Scale cluster’s KVM-based hypervisor:

qemu-img convert -O qcow2 -o compat=0.10 <srcfile> <dstfile>

The compat=0.10 part is there because it looks like the current version of qemu-img on my system (2.0.0+dfsg-2ubuntu1.14) actually builds QCOW version 3 files by default when you specify -O qcow2, not version 2, and the file command (and the Scale cluster) doesn’t recognise them. No, I have no idea why this is, but compat=0.10 fixes it.
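If you want to double-check which version a converted image ended up as, file and qemu-img info will both tell you; with compat=0.10 in place, file should report a version 2 QCOW image rather than refusing to recognise it:

file vRealize-Operations-Manager-Appliance-6.0.2.2777062-system.qcow2
qemu-img info vRealize-Operations-Manager-Appliance-6.0.2.2777062-system.qcow2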

We’ll need to run this three times, once on each of the VMDK files we extracted from the vRO tar format OVA file.
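(If you haven’t already unpacked the OVA, it’s just a tar archive, so something like this will pull the VMDKs out. The OVA filename here is an assumption based on the 6.0.2 build the VMDK names below come from.)

tar -xvf vRealize-Operations-Manager-Appliance-6.0.2.2777062.ova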

qemu-img convert -O qcow2 -o compat=0.10 vRealize-Operations-Manager-Appliance-6.0.2.2777062-system.vmdk vRealize-Operations-Manager-Appliance-6.0.2.2777062-system.qcow2
qemu-img convert -O qcow2 -o compat=0.10 vRealize-Operations-Manager-Appliance-6.0.2.2777062-data.vmdk vRealize-Operations-Manager-Appliance-6.0.2.2777062-data.qcow2
qemu-img convert -O qcow2 -o compat=0.10 vRealize-Operations-Manager-Appliance-6.0.2.2777062-cloud-components.vmdk vRealize-Operations-Manager-Appliance-6.0.2.2777062-cloud-components.qcow2
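If you’d rather not type that out three times, a quick shell loop over the three disk names does the same thing:

for part in system data cloud-components; do
    qemu-img convert -O qcow2 -o compat=0.10 \
        "vRealize-Operations-Manager-Appliance-6.0.2.2777062-${part}.vmdk" \
        "vRealize-Operations-Manager-Appliance-6.0.2.2777062-${part}.qcow2"
done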

So far, so good.

Importing the Image

We have two options for getting this into the right format for importing into the Scale cluster. We could edit the XML file and replace the name="scribe/<uuid>" parts with the names of our new qcow2 format files (minus the .qcow2 file extension).

Or, we can just copy the vRO files over the top of the existing files, and the XML file will already be pointing at them. Either method will work, so do whatever you find easiest.
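Here’s a rough sketch of the copy-over-the-top option. I’m assuming the exported disk images are named <uuid>.qcow2 to match the <disk/> entries and that the share sits at /srv/vmstore on the desktop; the UUID names are placeholders, so use the ones from your own Empty01.xml:

cd /srv/vmstore/Empty01
cp ~/vRealize-Operations-Manager-Appliance-6.0.2.2777062-system.qcow2 <first-disk-uuid>.qcow2
cp ~/vRealize-Operations-Manager-Appliance-6.0.2.2777062-data.qcow2 <second-disk-uuid>.qcow2
cp ~/vRealize-Operations-Manager-Appliance-6.0.2.2777062-cloud-components.qcow2 <third-disk-uuid>.qcow2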

Once you have the disk image files in place, we go back into the Scale admin GUI to import the images.

[Screenshot from 2015-07-03 15:52:28]

In this screenshot, I’ve renamed the export directory and XML file to vrops, and edited the XML file. Alternatively, you can just copy the disk files over the top of the Empty01 image you exported before and re-import it, in which case you’d only change the import path in this screenshot to /vmstore/Empty01. The imported VM is given the new VM name you set, so there’s no conflict.
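For reference, if you take the XML-editing route, each <disk/> source line ends up looking something like this (assuming you keep the long vRO filenames and drop the .qcow2 extension, as described above):

<source protocol="scribe" name="scribe/vRealize-Operations-Manager-Appliance-6.0.2.2777062-system"/>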

We now have a new image in the Scale cluster.

Starting vRO

This part is super-simple: just boot the VM. If you fire up the console, you can watch the various firstboot install stages, provided you select the Failsafe boot option instead of the default.

The appliance image does a whole bunch of installation and setup tasks the first time you boot, and it can take quite a while to complete, so I ran it in failsafe mode to keep an eye on things. Remember that I wasn’t all that sure it would actually work, given that it’s not running on a VMware hypervisor (or even one with underlying tweaks to be ESXi compatible, like what the Ravello folks have done).

But it does seem to work, at least as far as the installation goes. There’s some kind of issue with the vRO cluster starting up for the first time, which looks like something wrong with the data collectors. That’s possibly because there’s no ESXi or vCenter for vRO to talk to in this environment, but it’s a problem for another day.

However, it does show that OVA images can be imported into a Scale cluster without too much mucking around, and that means other systems can be migrated onto a Scale cluster without too much drama.

Now to investigate other methods, like the libguestfs virt-v2v and virt-p2v tools, for getting existing systems into a Scale cluster!
