TFD18 Prep: Datera

This is one of my traditional preparatory posts for Tech Field Day 18.



Datera hired a bunch of new senior executives in 2018. In January, it got a new CTO in Hal Woods, then in May Narasimha Valiveti was hired as VP of Engineering and Product. June brought a new Chief Marketing Officer in Chris Cummings, and in December Guy Churchward took over as CEO after joining the board in August.

That’s a lot of new people in key roles, which suggests either a change in focus and direction (the infamous pivot), or a need to bring in more experienced hands to take things to the next stage after an inflexion point of growth. I suspect a little of both, and the changes appear to be working.

Founder and President (and now ex-CEO) Marc Fleischmann blogged about Datera’s growth, and the figures sound encouraging. 240% revenue growth and 133% customer growth in six months (albeit off a small base) is a good sign of traction. Datera has also built out a solid list of partnerships, including a newly announced one with HPE.

The Tech

Datera sells storage software. Originally aimed at service providers, it supports API-driven, intent-based provisioning and employs a lot of smarts to decide where storage should actually come from. My notes indicate I last spoke to the company in November 2017, when Fleischmann told me Datera was doubling revenue every quarter. It was doing well with low-latency storage requirements for workloads like CDNs and edge-cloud locations. Stateful data portability was also a helpful feature, given the software provides a clean abstraction away from hardware.
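To make the intent-based idea concrete, here’s a minimal conceptual sketch in Python. This is not Datera’s actual API — the backend pool, field names, and placement policy are all invented for illustration. The point is the shape of the interaction: the caller declares desired outcomes (size, latency), and a placement engine decides where the storage actually comes from.

```python
# Conceptual sketch of intent-based provisioning (NOT Datera's real API).
# The caller declares outcomes; the engine picks a backend that satisfies them.

# Hypothetical pool of storage backends and their capabilities.
BACKENDS = [
    {"name": "pool-hdd",    "latency_ms": 8.0, "free_gb": 50000},
    {"name": "pool-hybrid", "latency_ms": 2.0, "free_gb": 20000},
    {"name": "pool-flash",  "latency_ms": 0.4, "free_gb": 5000},
]

def place_volume(intent, backends=BACKENDS):
    """Pick a backend that satisfies the declared intent."""
    candidates = [
        b for b in backends
        if b["latency_ms"] <= intent["max_latency_ms"]
        and b["free_gb"] >= intent["size_gb"]
    ]
    if not candidates:
        raise RuntimeError("no backend satisfies the requested intent")
    # Policy choice: prefer the slowest (cheapest) media that still
    # meets the latency target, keeping flash free for demanding work.
    return max(candidates, key=lambda b: b["latency_ms"])

# A CDN-style workload declares what it needs, not which hardware to use.
intent = {"size_gb": 1000, "max_latency_ms": 1.0}
print(place_volume(intent)["name"])  # -> pool-flash
```

The interesting part is that the same intent can be re-evaluated as backends come and go, which is what makes stateful data portability and hands-off hardware replacement plausible at scale.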

The kinds of companies that need this sort of abstraction above hardware tend to be service providers and telcos, and some large enterprises. The ones that choose startups like Datera tend to have significant challenges that the more traditional storage providers aren’t a good fit for, and that seems to have been a sweet spot for Datera as well.

Outside of specific use-cases, most organisations prefer to consume software-defined storage as-a-service from their cloud provider. Think S3, or whatever storage comes attached to your compute instances. Maybe you add some extra block storage if the amount in the pre-packaged instance isn’t enough. So far, uptake of “add on” storage (such as NetApp’s Cloud Volumes, which I mentioned in my prep post) seems to be fairly limited.

The other major use of software-defined storage in the enterprise is HCI, such as vSAN or Nutanix, but there it’s so tightly coupled to the hardware that it’s functionally the same as the storage software you get when you buy storage hardware. Raw disk (or flash) isn’t much use without software to drive it, after all. Yes, the storage is abstracted a little away from the hardware into the cluster, but that’s basically just moving the abstraction boundary up a level, from the RAID array to the servers. You could even think of it as moving compute into the JBOD enclosure, if you draw the enclosure boundary around the cluster.

The flexibility of software starts to come into its own once you hit a certain amount of scale, such as in a service provider (providing multi-tenant storage to lots of customers) or a large enterprise (especially IT shared-services setups that logically function like service providers). The physical devices become so numerous that you’re essentially always replacing or fixing some of them, so the service you’re providing (storage) needs to stay up while the replacements happen. This is the principle behind everything from RAID to HA to clustering to BCP to Kubernetes to cloud. Before that point, you don’t have a big enough problem to justify investing in the processes and automation required to run things that way, so the more manual approach is annoying, but good enough to keep you going.

Generally organisations have to go quite a way past the good enough phase and well into everything is on fire before they decide to change. There’s enough social pressure out there now that Automation is Good Actually that the naysayers and the people who insisted that Things Are Different Here are becoming less influential. It’s still relatively early, but it does feel like the major initial barriers have been cleared and it’s now just a matter of time. Much as with cloud: no longer a dirty word, but neither is it a religion you have to convert to. It’s just a technology tool you use when appropriate, and there’s much better information available now about what appropriate actually means.

The push towards the DevOps style of operating IT lends itself to software-based options like Datera. All the hardware-centric storage vendors have gotten on the RESTful API bus, so the distinction between hardware- and software-defined is getting pretty blurry. That’s both a blessing and a curse for Datera: software-defined storage stops being weird, but it also stops being special, so standing out from a traditional (and better known) vendor becomes more challenging. Datera’s advantage is that it doesn’t have legacy baggage to slow it down, and it can use traditional vendors (such as its partnership with HPE) to buy credibility and get into larger enterprises that feel nervous about using startups.

I expect we’ll see some new product/feature announcements from Datera, as they’ve scheduled a couple of Tech Field Day appearances this year, and the new management has been dropping hints for several months about things they’ve been working on.

Disclosure: Datera was a past customer of PivotNine.
