TFD9 Review: Commvault

Commvault are well known for their backup and recovery product, Simpana, and rightly so. From all reports, it’s a great product.

I’ve not used it personally, largely because my consulting clients have mostly been bureaucratic old enterprises who installed Legato or NetBackup 15 years ago and have stuck with it because change is scary and hard. I’m most familiar with NetBackup up to version 7.x, but even that knowledge is fast getting irrelevant or simply forgotten.

Commvault integrate with lots of stuff, unlike narrower products like Veeam. That’s good for enterprises, because they like to have one solution for the entire company. What they gain in ubiquity they sacrifice in functionality, particularly for newer technologies like virtualisation. It’s just super expensive to maintain a codebase that can talk to ACSLS, DAT drives and VAX VMS, every flavour of Unix and Windows, and still present a more-or-less unified interface.

Having said that, Commvault is still newer than NetBackup and TSM, so they don’t have as much legacy cruft weighing their product down. Hence people are flocking to Commvault in droves, and Commvault are raking in the cash.

OnePass

Simpana 10 has a thing called OnePass, which is all about only touching endpoint devices once. You have to grab all the data anyway, so why do it more than once? The backup takes a copy of the data, and you just link it with your archival and legal hold policies. Most companies have to do this anyway, but need to use different applications to do it. Why not converge them?
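The single-pass idea can be sketched in a few lines: walk the data once, and apply every policy (backup, archive, legal hold) during that one walk. This is a minimal illustration of the concept, not Commvault’s actual policy model — the policy names and thresholds here are invented.

```python
import os
from datetime import datetime, timedelta, timezone

# Invented policies for illustration only.
ARCHIVE_AFTER = timedelta(days=365)
LEGAL_HOLD_PATHS = {"/data/contracts"}

def classify(path, mtime, now):
    """Decide, in a single pass, what to do with one file."""
    actions = ["backup"]  # every file gets backed up
    if now - mtime > ARCHIVE_AFTER:
        actions.append("archive")  # stale data also moves to the archive tier
    if any(path.startswith(p) for p in LEGAL_HOLD_PATHS):
        actions.append("legal_hold")  # retained regardless of other policies
    return actions

def one_pass_scan(root):
    """Walk the tree once; every policy is applied during that single walk."""
    now = datetime.now(timezone.utc)
    plan = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            mtime = datetime.fromtimestamp(os.path.getmtime(full), timezone.utc)
            plan[full] = classify(full, mtime, now)
    return plan
```

The point is that `classify` runs once per file: the endpoint is read a single time, and the backup, archive and hold decisions all fall out of that one read.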

Commvault have, and they call the resultant data repository the Content Store. This centralised content store concept is what allows you to do neat stuff with all the data.

As an added bonus, the storage requirements are lessened, because you’re not keeping zillions of copies of the same data. You need to have some copies, because that’s what a backup is, but you don’t need to have more than necessary.
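The “not keeping zillions of copies” part is classic content-addressed deduplication: chunk the data, hash each chunk, and store each unique chunk once, with objects holding references. Here’s a toy sketch of the general idea, assuming fixed-size chunks and SHA-256 addressing — not a description of Commvault’s on-disk format.

```python
import hashlib

class ContentStore:
    """Toy content-addressed store: identical chunks are kept only once."""

    def __init__(self):
        self.chunks = {}   # sha256 hex digest -> chunk bytes (stored once)
        self.objects = {}  # object name -> ordered list of chunk digests

    def put(self, name, data, chunk_size=4096):
        digests = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # skip if already stored
            digests.append(digest)
        self.objects[name] = digests

    def get(self, name):
        """Reassemble an object from its chunk references."""
        return b"".join(self.chunks[d] for d in self.objects[name])
```

Two laptops backing up the same document cost you two small reference lists, not two full copies — which is where the storage savings come from.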

More important than the storage quantities is the time taken to make the copies. Touching a roaming user’s laptop twice, once for backup and once for archive, is stupid. Touching things once and being smart about what you do when you have to touch them is a vastly superior approach. It’s so obviously better one wonders why we don’t already do this as a matter of course.

I love solutions that seem obvious in retrospect.

Access Your Data

The basics of backup and recovery are boring (terribly important and necessary, but boring) so I won’t rehash them here. What interested me about Commvault was the way they’re using the data they already have to do new and interesting things.

Commvault have apps for phones and tablets, and they’re for more than just admins wanting to run a restore for a user remotely. They also provide self-service (great for keeping OpEx down), but even better than that, they let users access their data from anywhere.

Say you’re on the road and need to look up some information in an email or a document. You can use the Simpana Edge app on your phone to get it, as shown in the video from the TFD9 demo. This comes from the backup system, not the production system (whatever that is). The centralised point of access means it’s easier to set up and maintain, and you avoid the n(n-1) problem of point-to-point interfaces.
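The n(n-1) figure counts directed interfaces when every one of n systems talks directly to every other; a central hub like the content store needs only one connection per system (two, counting each direction). A trivial sketch of the arithmetic:

```python
def point_to_point_links(n):
    """Directed interfaces when every system talks to every other system."""
    return n * (n - 1)

def hub_links(n):
    """Directed interfaces when every system talks only to a central hub."""
    return 2 * n  # one link in, one link out, per system

# With 10 systems: 90 point-to-point interfaces, versus 20 through a hub.
```

The gap widens quadratically: at 50 systems it’s 2,450 interfaces versus 100, which is why a single point of access is so much cheaper to maintain.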

Because the backup system has all the data. It has to. So since it’s already in there, and you need to provide self-service capabilities anyway, why not just let users view their documents using the same access mechanism? Why set up a separate mechanism for the same function?

You have all the same security problems to address, so if you’ve solved them for restore access, you’ve already solved them for view access as well. Now you don’t need to buy a separate system to provide BYOD access to files. Nifty!

Legal Hold

The legal team can do legal discovery searches using an interface into the content store, with role-based access control, of course. No need to make a separate copy of all the data just for legal reasons.

And now if they decide to put a legal hold on things, it will keep the data regardless of whether it’s in the backup, the archive, or the email journal. The data is logically in the one place, the content store, and now backup/archive/journal are just “views” on the content.
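The “views on the content” idea can be sketched simply: one logical store, with backup, archive and journal as mere filters over it, and legal hold overriding every retention decision no matter which view an item appears in. The field names here are invented for illustration, not Commvault’s schema.

```python
# One logical store; each item knows which "view" it came in through
# and whether it is under legal hold.
items = [
    {"id": 1, "source": "backup",  "age_days": 40,  "on_hold": False},
    {"id": 2, "source": "archive", "age_days": 900, "on_hold": True},
    {"id": 3, "source": "journal", "age_days": 10,  "on_hold": False},
]

def view(source):
    """A 'view' is nothing more than a filter over the single store."""
    return [i for i in items if i["source"] == source]

def expirable(item, retention_days):
    """Retention never deletes data under legal hold, whichever view it's in."""
    return item["age_days"] > retention_days and not item["on_hold"]
```

Because hold is a property of the item, not of any particular view, placing a hold protects the data whether it arrived as a backup, an archive copy, or a journaled email.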

Legal discovery and hold is super-boring (unless you’re the target) but super-important, so again, making this simple and easy is a smart move by Commvault.

It’s All About The Information

Commvault have realised an important truth: the backup system has to have access to all the systems, and all of the data. By definition.

The historical purpose of backups is restoration if you break or lose something. It’s insurance. And, just like insurance, it’s really boring and often underfunded. What if we could make active use of these systems? Like using your DR system for testing, or as a build slave, or whatever, when it’s not actually standing in for production, which is most of the time (you hope).

Most people feel that these systems are just sitting there, somehow being wasted. They’re wrong, in the same way that having insurance is a use of money, not a waste. But politics is perception, so if we can use these systems in an active fashion, it’s an easier sell to the people who sign the cheques.

Commvault have realised that because you have a backup system, you already have a centralised data repository, so why build another one for reporting, or roaming read access, or legal discovery? Why not make better use of the one you have?

If you’re not already welded to another backup solution, I highly recommend checking out what Commvault have to offer.

 
