I first met Scale Computing at Storage Field Day 5, and I was impressed.
Their gear is ideally suited to small- and mid-size companies that often lack oodles of Internet bandwidth for cloud-based services, and that have on-site systems needing local control, like manufacturers. The HC3x nodes are a great option for anyone running a few dozen to maybe a hundred virtual machines, and they're spec'ed and priced accordingly. It's just a great product/market fit.
I was also very impressed with the state-machine/expert system they use to manage their system's operations. It's a fault-tolerant, self-healing piece of software that controls the gear, and I maintain it has applications beyond HC3x cluster management. I'd love to see something like it make its way into OpenStack to manage the complexity of all the moving parts, ensuring everything stays up and providing sensible alerts to administrators.
Because it's the simplification of complexity that makes this stuff great. KVM and open source are all very good, but if you need a Masters in CompSci to operate them, they'll always be a niche product. VMware became the dominant player because it abstracted away the complexity and made things easy. The same goes for Veeam and VM-based backup/recovery. The same, it would appear, for SimpliVity, and it's also what VMturbo are trying to do.
Notice a trend here? Work hard to make it easy for customers. Make the computers do all the hard work.
I wonder if that'll be the trend this week at VFD4.
I’m keen to hear what’s new with Scale. They’re a great bunch of people, and I genuinely hope they do well.