This is part of a series of posts about Storage Field Day 9.
Cohesity is one of a relatively small number of storage companies going after the secondary storage market.
The primary storage market is awash with startups all boasting about how fast and shiny they are, and the crunch time for many of them has begun. However, the secondary storage market, i.e. everything that isn’t primary storage, has been fairly sedate for a number of years. The last really big thing to happen there was Data Domain, which ended up in a bidding war between NetApp and EMC, and EMC decided to pay the most.
Secondary storage is a bit of a funny market as well, because it’s half primary storage that’s for second tier workloads, like test and dev, and half some other storage for backup and archival things, which is a mix of slower disk and tape. The software that drives it is generally a version of something first built 20-odd years ago: TSM, Legato/NetWorker, NetBackup, Commvault, all of which follow a master server/media server style of architecture.
Cohesity and its rivals are aiming to shake this market up a bit.
I’ve spoken to Cohesity founder and CEO Mohit Aron a couple of times, once about Nutanix, weirdly enough, because he’s still a major shareholder after departing the company back in 2013. I’ve also spoken to new-ish board member Dan Warmenhoven (ex-NetApp) on The Eigencast 005 about why he joined the company, among other things.
Cohesity’s goal is to be a kind of unified secondary storage platform that you have alongside whatever you use for primary storage. They see this as a very large market, which is probably not far wrong if you count up all the primary storage being used for secondary workloads as well as all the backup targets.
Primary storage in this world is for fast things. Transactional, responsive. Tier-0 or tier-1. Everything else should go somewhere else, and in the current world, it does, but it tends to go onto many different something elses. Backups go to tape, or a Virtual Tape Library, or a Data Domain or something similar. Copies of production for development, reporting, etc. are sent off to a different type of primary storage that’s bigger and slower than whatever is used for the main system.
This is what’s meant when Cohesity, and others, talk about Copy Data Management, which I’m sad to see is still on Cohesity’s website. I don’t like the term because it’s a clumsy and ugly way to describe what’s going on. Secondary storage is a simpler and easier way to refer to what Cohesity is about, and one I think people can understand more readily.
Anyhow, once you have data on your secondary storage platform, it stays there and gets connected to whatever systems need access to it. That’s what’s meant when Cohesity say “bring the compute to the storage, instead of the other way around”. Cohesity uses fancy software to make clones of the data for use as test/dev, or reporting, or whatever else you need to do. Its aim is to have enough performance for those workloads that don’t need primary storage. Simple.
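The trick that makes those clones cheap is copy-on-write: a clone copies only the pointers to data blocks, not the blocks themselves, so it’s near-instant and costs almost nothing until the copy diverges. Here’s a toy sketch of the idea in Python — an illustration of the general technique only, not Cohesity’s actual implementation (the `Volume` class and its methods are invented for this example):

```python
class Volume:
    """Toy copy-on-write volume: maps block number -> data block.

    The blocks themselves are treated as immutable and shared between
    clones; only the block map is private to each volume.
    """

    def __init__(self, block_map=None):
        self._map = dict(block_map or {})

    def clone(self):
        # Copy only the block map (pointers), not the data blocks:
        # near-instant, near-zero extra space until the clone diverges.
        return Volume(self._map)

    def write(self, block_no, data):
        # Writes land in this volume's own map; shared blocks in other
        # volumes are untouched.
        self._map[block_no] = data

    def read(self, block_no):
        return self._map.get(block_no)


prod = Volume()
prod.write(0, b"customer records")

dev = prod.clone()                  # instant test/dev copy of production
dev.write(0, b"scrubbed test data") # diverges only the written block

print(prod.read(0))  # production data is unchanged by the dev copy
print(dev.read(0))
```

A real platform does this at scale with deduplication and snapshots underneath, but the economics are the same: the test/dev copy only pays for the blocks that actually change.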
And because you’re making a copy of the data anyway, you get data protection for free. Or, because you need data protection, you get test/dev or reporting/analytics copies of your data for free. It’s an efficiency story not just from a pure storage perspective, but also from an operational management perspective. No more dedicated backup team manually reviewing and configuring a zillion backup jobs that always fall over at 2am.
I think there’s a big market out there of people who want to retire the older way of doing things in favour of software that deals with all the complexity for you. Cohesity aren’t the only ones giving this a go, but I think the trend here will be the newcomers taking share from the incumbents, rather than them battling to take customers from each other. There’s enough market potential for all of the new players to grow strongly without having to compete too hard with each other.
It’s the legacy providers that will need to watch out.