TFD10 Prep: Caringo


This is part of my series of posts for Tech Field Day 10.

Caringo is an object storage software company, essentially.

Caringo make a thing called Swarm, which is scale-out object storage software. It can run on physical servers or VMs, apparently, and supports lots of different access protocols including S3, Swift, NFS, SMB/CIFS, iSCSI, and HDFS. That sounds pretty cool, and if I could set up a multi-VM/multi-physical filestore type thing with appropriate data protection capabilities in my lab, that could come in quite handy.

Swarm is available for a free trial that supports up to 2TB of storage (pretty good!) which I had a go at grabbing and installing. There were a couple of snafus in the process of getting the VM image, such as not getting an automated “here’s the download link” after signing up (there’s a manual approval step), which was a little odd for 2016. Still, the friendly people at Caringo got me sorted out, and I managed to get as far as loading up the OVF into my Ravello account and spinning up a few VMs.

Alas, the documentation isn’t quite detailed enough to easily follow step-by-step as someone completely new to the process, so I was able to get the VM booted and running, but wasn’t sure how to configure it to actually do anything useful. And then I ran out of time and wasn’t able to get back to playing around with it to figure out where I’d gone wrong.

When I say Caringo is an object storage software company, I’m not entirely sure. That’s what I think they do, but it’s not entirely clear why I should care. What’s the software for? Caringo lists a bunch of possible use-cases on their website, all of which seem to involve large amounts of data that need to stay online, i.e., not on tape. Okay, but now what?

Last night after dinner, I was talking with fellow delegate Chris Evans about this very thing. We’re not sure what all the various object storage type solutions are really for.

I want to understand who the customers for Caringo are, and what they’re doing with the software, and particularly why they need this kind of solution and not other possible options. I don’t have a good mental map of the market that Caringo play in, and where they are on that map, and it wasn’t easy for me to figure it out from reading their website and doing some basic research.

That concerns me, because if I’m a potential customer, I need to be able to figure out if Caringo is an option that should be in my initial set of candidate solutions. That’s a simple yes/no question, and it happens before I dig into things in more detail and start comparing options against each other. I have to figure out which options I should bother finding out more about before I potentially waste my time researching an option that is obviously not for me. I don’t go shopping for shoes in a women’s lingerie store.

Hopefully Caringo can help me to understand their market and where they fit in it. And then I can spend some more time getting the software running to see if the reality matches the marketing.

TFD10 Prep: VMTurbo

This is part of my series of posts for Tech Field Day 10.

VMTurbo Logo

VMTurbo make a piece of operations management software for helping to automatically manage resource allocation in your virtualized environment. I’ve seen them present before, at Virtualization Field Day 4, and have used the software in my lab. Last time I was intrigued by the way they used a free-market analogy when talking about how the software works. It’s an imperfect analogy, but it works quite well in the way it models resource usage.

The interface is simple without being simplistic, and puts useful information right in front of you quickly. I haven’t looked at the latest version yet, but I believe it was moving to an HTML5 interface instead of Flash, which is good, because Flash is just an enormous security problem and I want as little of it as possible in my environment.

I also see that their latest marketing mentions VMTurbo is used alongside VMware’s vRealize Operations (VROps). I did a paid gig reviewing VROps a little while back, so this is interesting. VMTurbo say that VROps is good for monitoring against baseline, while VMTurbo is good for getting your environment into the right place and trying to keep it there.

This suggests that people use VROps for monitoring and alerting for troubleshooting problems, and VMTurbo to keep resources optimally allocated automatically.

Fair enough.

I can’t imagine it’d be that hard to use VMTurbo to highlight problems as well, though. Essentially, VMTurbo’s whole design ethos is to help you bring your system under statistical control by determining where the mean is for the control baseline, and how much variance is permissible before the system takes action to reallocate resources. That should mean you can put most of the system under automatic control and work around problems as they occur, provided there is a certain level of safety stock available.
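The statistical-control idea above can be sketched in a few lines. To be clear, this is my own toy illustration of the concept, not VMTurbo code: establish a mean and control band from historical samples of a metric, then flag new samples that fall outside the band as triggers for a rebalancing action.

```python
# Toy sketch of statistical process control for a resource metric.
# Not VMTurbo code; the data and thresholds are made up for illustration.

def control_limits(samples, n_sigma=3):
    """Return (mean, lower, upper) control limits for a list of samples."""
    mean = sum(samples) / len(samples)
    variance = sum((x - mean) ** 2 for x in samples) / len(samples)
    sigma = variance ** 0.5
    return mean, mean - n_sigma * sigma, mean + n_sigma * sigma

def out_of_control(samples, new_value, n_sigma=3):
    """True if new_value falls outside the control band, i.e. take action."""
    _, lower, upper = control_limits(samples, n_sigma)
    return not (lower <= new_value <= upper)

# CPU utilisation history hovering around 50%.
history = [48, 52, 50, 49, 51, 50, 47, 53, 50, 50]
print(out_of_control(history, 51))  # inside the band: no action needed
print(out_of_control(history, 90))  # well outside: reallocate resources
```

The interesting design question is how wide the band should be: too narrow and the system thrashes on normal variance, too wide and problems go unaddressed.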

VMTurbo represents an excellent trend in IT operations to start using techniques that revolutionised manufacturing in past decades. The industrialisation of IT is long overdue, in my opinion, and I’ve been banging on about all this stuff for well over a decade.

I look forward to hearing about how VMTurbo have added to the already good baseline product, and where they’re taking things in the next couple of years.

TFD10 Prep: SolarWinds

SolarWinds Logo

This is part of my series of posts for Tech Field Day 10.

I last looked at SolarWinds about a year ago, for Virtualization Field Day 4. Back then, I did a simple forecast of the full year results based on the quarterly results to that point. I was pretty much on the money, so an extrapolation from the third quarter results to the full year results is close enough to be usable.

I’ve done that again this year, because the full year results for 2015 aren’t yet available.
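The forecast method is nothing fancy: scale the quarters reported so far up to a full year. A quick sketch (the revenue figures here are placeholders, not SolarWinds’ actual results):

```python
# Simple full-year extrapolation from partial-year quarterly results.
# Figures are hypothetical, for illustration only.

def full_year_estimate(quarterly_results):
    """Extrapolate full-year revenue from the quarters reported so far."""
    quarters_reported = len(quarterly_results)
    return sum(quarterly_results) * 4 / quarters_reported

# Three quarters of hypothetical revenue, in $M.
print(full_year_estimate([110, 120, 125]))  # roughly $473.3M for the year
```

It ignores seasonality, which is why it only works when the business is reasonably steady quarter to quarter, as it was last time I tried it.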

SolarWinds are about to go private. This is an important thing to take into account when looking at the financials, because the go-private transaction was announced in late October 2015. The stock price was $50.20 just before the announcement, and immediately jumped to $58.31, just under the transaction offer of $60.10 per share. Why a bit lower? Because there’s a risk that the deal might not actually go through, and investors want to be compensated for that risk. That risk is now seen as small, though, which is why the stock price is pretty close to the offer price. At time of writing, SolarWinds stock is trading at about $59.95.
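The gap between the trading price and the offer price is easy to put a number on, using the figures quoted above:

```python
# Merger-arbitrage spread using the prices quoted in the post.
offer_price = 60.10    # per-share go-private offer
trading_price = 59.95  # market price at time of writing

spread = offer_price - trading_price
spread_pct = spread / trading_price * 100
print(f"Spread: ${spread:.2f} ({spread_pct:.2f}%)")  # about $0.15, or 0.25%
```

A quarter of a percent is a thin spread, which is the market saying it thinks the deal is very likely to complete.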

But why go private? Here are some hints:

SolarWinds Return on Equity and Return on Assets chart (Source: SEC filings and eigenmagic analysis)


SolarWinds common size income chart. (Source: SEC filings, eigenmagic analysis)


We see that SolarWinds hasn’t been making as much money from the money invested in the company by shareholders, which is what Return on Equity summarises. It’s mostly due to Return on Assets dropping, which as I said last time is all about operations: the company’s actions in the field to get people to buy its products and services.

These two charts illustrate the underlying problem for SolarWinds. The first is the changing makeup of what SolarWinds sells:

SolarWinds sales sources (Source: SEC filings and eigenmagic analysis)


We see that outright product sales have been declining for many years, being replaced by services (which includes things like maintenance on previously purchased products). Over the past three years, SolarWinds have introduced subscriptions. This is fine, because it turns volatile products and maintenance sales into the more predictable subscription renewals. Smoother cashflow is nice.

But there’s a problem with this plan:

SolarWinds gross margins by sales category (Source: SEC filings and eigenmagic analysis)


The gross margin (price less cost) for services isn’t as good as products, so you have to sell more services to make the same amount of money as products. It’s much worse for subscriptions. The result of this trend is that as SolarWinds changes its sales mix to be more subscriptions and services, it has to sell a lot more in absolute terms in order to make the same amount of money as it used to selling products.
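A rough worked example makes the trade-off concrete. The margins below are made up for illustration, not SolarWinds’ actual figures: if products carry a 95% gross margin and subscriptions only 50%, matching the gross profit from $100 of product sales takes $190 of subscription sales.

```python
# How much revenue is needed at a lower gross margin to match the same
# gross profit? Margins here are hypothetical, for illustration only.

def revenue_needed(target_gross_profit, gross_margin):
    """Revenue required to generate a given gross profit at a given margin."""
    return target_gross_profit / gross_margin

product_margin = 0.95       # hypothetical product gross margin
subscription_margin = 0.50  # hypothetical subscription gross margin

product_revenue = 100.0
gross_profit = product_revenue * product_margin  # $95 of gross profit

print(revenue_needed(gross_profit, subscription_margin))  # 190.0
```

So for every dollar of product revenue that shifts to the lower-margin category, total sales have to grow just to stand still on gross profit.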

It’s a big transition.

And it’s the sort of transition that companies are finding is easier to do as a private company compared to a public one. We saw that with Dell, and one of the financiers there was Silver Lake Partners which is also involved in the SolarWinds deal. Maybe it’s an Austin thing, or maybe Silver Lake is building a reputation for helping tech companies to restructure themselves when big changes are needed.

Another important detail going into TFD10 for me is that SolarWinds have been investing heavily in Sales and Marketing and Research and Development in the past two years. It’s a shame we don’t have the final quarter SEC filings, because I want to see if those investment rates have stayed up after the go-private transaction was announced.

Why? Because if this was a “strip the assets to pay off debt fast, then re-float a lean but hollow shell” type thing (like what happened to Dick Smith Electronics in Australia; this is an excellent write-up of how you can read financial statements to see what’s really going on) I would expect the R&D or Sales and Marketing tap to be turned off so there’s more cash in the company.

I actually think that SolarWinds wants to have more flexibility to make hard choices, and quickly, without having to deal with public market shareholders in order to get their approval for the plans, particularly when you have activist investors who are just looking for a quick buck. Instead of spending months placating massive egos, or worrying about the opinions of the under-informed, SolarWinds management–and the major shareholders–can just do what they think is best.

If they’re right, they make loads of money. If they’re wrong, they lose a lot. But that’s the fun of running a company.

What interests me for TFD10 is what SolarWinds’ product strategy looks like in this context. What stuff are they building, and why? Who are their customers, and what problem do they have that SolarWinds wants to help them with? What does the solution look like, and why is it a good fit? And how will that evolve over the next few years?

Should be interesting!

TFD10 Prep: Platform9

Platform9 Logo

This is part of my series of posts for Tech Field Day 10.

Platform9 are an interesting company, and seem to be well positioned to take advantage of the growing maturity of OpenStack as an on-site cloud-style deployment method.

I spoke to CEO and co-founder Sirish Raghuram earlier this week as a lead-in to TFD10. The audio will get posted on The Eigencast whenever I find the time to edit and produce it, sometime over the next couple of weeks, probably.

When I first met Platform9 at Virtualization Field Day 4, I was impressed with the idea behind what they were trying to do: make something inherently complex easy to use. I went so far as to give it a try myself, which you can read about here. Short version: yep, it works, and it’s easy to use.

There are two main things that impress me about the way Platform9 have gone about things. Firstly, they are focused on a specific use-case–enterprise technology people who want to deploy cloud things and like the openness of OpenStack–and they have a clear idea of what that person wants to do. There are plenty of opportunities to do something else, like have an on-site version of the controller instead of operating a Software-as-a-Service model, but they say no to those opportunities to stay focused.

Secondly, they support a heterogeneous environment for their customers–KVM and VMware are both supported hypervisors–because they understand that their target customers–enterprises–don’t have a single monolithic environment. Mergers, acquisitions, shadow-IT brought into central-IT: there are plenty of ways that large environments can end up with multiple solutions to the same general problem. Throwing it all out and starting again isn’t justifiable in most cases, so it’s smart for Platform9 to make it easy (there’s that word again!) for customers to use the software without breaking their existing environments, or having to throw out a bunch of functioning gear.

Platform9 allows a company to incrementally improve, which, given the success rates of major IT projects, is a smart way to do things.

There is one discordant note in the symphony of simplicity that Platform9 are working on: when I asked Sirish which type of business he was building–low-margin, high volume or high-margin, low-volume–he chose the former. That doesn’t match up with the marketing in the carousel on their website, one pane of which boasts that Platform9 is “The only private cloud with white glove service.”

White glove service is, to me, a premium offering, which should attract premium prices or it’s not sustainable. True white glove service is expensive to provide, so if you don’t cover the cost of providing it with an appropriate price, then you’ll lose money. That’s not a sustainable way to run a business.

Chalk it up to the inevitable missteps of running a startup, I suppose. I’ll be digging into this possible disconnect some more this week, and hopefully this is just a minor error rather than an indicator of a deeper misalignment that could derail things later on.

Update 1 Feb 2016

Sirish contacted me about this post to clarify the white glove service angle, and it’s a good response, so here is his email:

I wanted to share my thoughts on the consistency, or perceived inconsistency, between the marketing message “white-glove service” and the business model “high volume, low margin”.

We chose to go low-margin and high-volume because we believe that is a key facet of the public cloud that has led to massive success, and the private cloud can and should beat the economics of the public cloud.

Traditionally, white-glove service has been provided with extensive consulting and professional services. Clearly, as you have pointed out, a traditional white glove service model wouldn’t be viable.

At Platform9, we’ve spent a significant portion of our R&D budget on building a white glove service experience into the _product_, aka our SaaS platform. By this, we’re referring to the ease of onboarding with a wide range of existing environments, and the customer success experience going from there on. Both of these are primarily provided by the _product_, not people, and that’s what differentiates us and enables a highly cost and time efficient delivery model.

I hope this helps clarify. I look forward to seeing you in Austin soon.

TFD10 Prep: Rubrik

Rubrik Logo

This is part of my series of posts for Tech Field Day 10.

Rubrik are one of a number of companies who have decided that the secondary storage market–basically everything that isn’t primary storage–has only seen incremental changes for the past decade or so, basically since Data Domain was a new thing.

Back in the day, Virtual Tape Libraries were all the rage, but no more. Tape is old and boring, and outside of specific use-cases, it’s not as much fun to use as disk. The new style of secondary data company is heavily VM centric, because most organisations are, and it’s a lot easier to move data around if it’s encapsulated in a VM, to be quite honest.

Because the data is contained within the logical structure of a VM, it’s much, much easier to do things like take a copy of the VM over to the datacentre over there, and then turn it on if we need to. Doing that with physical devices means you need a 1-to-1 mapping of physical to physical. Moving the target location means picking up a box and moving it. Not so great in today’s cloudy world.

The new approach is to have something sitting in the back-end that abstracts away all the messy metadata and schedules and data storage and everything and makes it look like a magic black box. Then you just talk to the black box if you need your data back at some point, and it goes and figures out how to find it.

That was the promise of previous backup solutions, but the reality was a messy accumulation of twenty years worth of master server this and media server that and tape silo managers and robots and rotation schedules and oh-god-kill-me-now. Actually the tape robots are pretty cool, and if you’ve never seen a series of octagonal StorageTek 5500s handing tapes to each other through the interconnect module, you really should check it out. Watching the SL8500 tape arms whizz along the track and not run off the end is pretty excellent as well.

But I digress.

Putting your backups into a great big pool of disk makes a few things easier to do, not least restores. It also means you can do things like global de-dupe and compression, which saves you lots of space when operating system images are mostly the same as each other. This is something that Rubrik highlights as an advantage of their architecture compared to something like Data Domain, because they use scale-out instead of scale-up, which can match the scale of large data sets more efficiently than scale-up can. It’s not a free trade-off, because scale-out is harder to design and build than scale-up, but the technology has been around long enough now that a lot of people have good experience with how it works and it’s getting baked into more and more stuff.
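The space saving from global de-dupe comes from storing each unique chunk of data exactly once, keyed by a hash of its content. Here’s a minimal sketch of the idea, using fixed-size chunks and SHA-256 purely for illustration; real systems (Rubrik’s included, I’d assume) use far more sophisticated chunking and indexing than this:

```python
import hashlib

CHUNK_SIZE = 4  # absurdly small, just to make the example visible

def dedupe_store(blobs, chunk_size=CHUNK_SIZE):
    """Store blobs as unique chunks keyed by content hash.

    Returns (chunk_store, recipes), where each recipe is the ordered list
    of chunk hashes needed to reassemble the original blob.
    """
    chunk_store = {}  # sha256 hex digest -> chunk bytes
    recipes = []
    for blob in blobs:
        recipe = []
        for i in range(0, len(blob), chunk_size):
            chunk = blob[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            chunk_store[digest] = chunk  # duplicate chunks cost nothing extra
            recipe.append(digest)
        recipes.append(recipe)
    return chunk_store, recipes

def restore(chunk_store, recipe):
    """Reassemble a blob from its recipe of chunk hashes."""
    return b"".join(chunk_store[d] for d in recipe)

# Two "OS images" that are mostly identical: most chunks are shared.
images = [b"BOOTKERNLIBSDATA", b"BOOTKERNLIBSLOGS"]
store, recipes = dedupe_store(images)
print(len(store))  # 5 unique chunks stored instead of 8
```

The payoff is exactly the mostly-identical-OS-images case: the more the data overlaps, the fewer unique chunks you actually have to keep.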

Rubrik have taken in about $51 million in venture funding in two rounds so far, the last a $41 million series B in May 2015, just two months after taking in $10 million in March 2015, so they should be pretty well funded for growth and R&D. Rubrik presented at Virtualization Field Day 5 in June last year, so it’ll be interesting to see what they’ve managed to achieve in the past 7 or so months, and where they’re planning to take things.

Secondary storage is a big, big market, and competition is already pretty substantial–though nowhere near the current primary storage market, which is brutal–so I’m keen to hear how Rubrik plans to differentiate themselves and what they see as the value of secondary data.