BlogFest2: Symantec

This is part of a series on SNIA BlogFest 2.

Symantec were the last vendor of the day, and we arrived more or less on time at their offices in North Sydney, which is quite a feat given the distances between vendors all day.

We were met by Michael Porfirio, Systems Engineering Manager, and Paul Lancaster, Senior Director, Systems Engineering.

Symantec’s View of the World

Symantec see a couple of clear trends in the marketplace. Utilisation is an issue, particularly with storage devices. Many customers are using only 15-20% of their deployed storage for active data.

NAS is a lot more popular these days, compared with SAN. I assume they mean file storage devices (CIFS and NFS) rather than traditional block-level devices.

Curiously Symantec see enterprise data centres as a mature market. I raised an eyebrow and made a note at the time, but didn’t question them, as we were just getting started with their session. Pointed questions would come later.

But now, having had a think about it, I disagree. A mature market exhibits little innovation and flat or declining growth. Data centres are springing up all over the place, and there’s huge innovation going on in power and cooling driven by higher costs and the “green” trend. To call this market mature is, well, wrong. I just can’t see any evidence to support their view, but plenty of evidence to the contrary.

Cloud Washing Award

Symantec also have a lot of “cloud” on their minds, because it came up a lot. Too much, actually, and Rodney called them on it.

Right off the bat, Symantec used the cloud buzzword, and it wasn’t immediately clear what they meant by it. Many of the slides appeared to have just had the word cloud attached to them, but without any depth to back it up.

This was a shame, because we’d pretty much avoided any cloud hype all day. Everyone else was pretty clear that “cloud” is just lazy shorthand for a bunch of different solutions, and it’s fairly meaningless without more context. The word was used carefully, if at all.

Symantec Storage Array: The N8300

The surprise of the day for me was that Symantec are bringing out a storage array. It wasn’t entirely clear at first that it was new, because there were references to it being used in the US by a bunch of customers. On clarifying, these are special customers who have early, “pre-launch” versions of the device. It only seems to have been announced to the market on 12 April 2011, and will be officially launched at Symantec’s Vision conference in Sydney on 13 September 2011.

So this is a bit of a scoop. Sort of.

Key Details (initial version)

The device is a CIFS/NFS serving array built with Huawei hardware, and running some Symantec software.

It uses Symantec’s Clustered File System across multiple nodes (up to 6, according to the website), with variable extent sizes from 2k to 64k. The OS is based on a version of SuSE Linux. There are probably some SPEC SFS benchmark results available somewhere. It supports GigE and 10GigE interconnects between the nodes.

It has the ability to do some sort of Namespace migration from existing storage devices, but the details weren’t readily apparent, and I was a bit confused by this part of the discussion.

It can act as a destination for Enterprise Vault, and is being used internally to store the 65 petabytes of data from the Norton Backup Service.

You can use the unfortunately initialled Veritas Operations Manager to manage them.

It can track how often specific files get used, rather than relying on access-time markers, which only tell you when a file was last used.
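As an aside, the distinction matters: atime tells you a file was touched, not whether it’s hot. A minimal sketch of the difference (the counter and `tracked_read` helper are purely illustrative, not anything the N8300 exposes):

```python
import collections
import os
import tempfile

# atime records only the *last* access; a frequency counter captures
# how often a file is actually read.
counter = collections.Counter()

def tracked_read(path):
    """Read a file while keeping our own access-frequency count."""
    counter[path] += 1
    with open(path, "rb") as f:
        return f.read()

with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(b"hello")
    path = tf.name

for _ in range(3):
    tracked_read(path)

atime = os.stat(path).st_atime  # one timestamp: when it was last read
print(counter[path])            # -> 3: how often it was read
os.unlink(path)
```

Three reads produce one atime but a usage count of three; it’s the count that tells you which files deserve fast disk.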

There’s also a software appliance version, so you can run it as a VM.

When Bad Marketers Attack

Reading that last section has, I hope, given you some idea of the disjointed and confusing way the device was explained to us.

The slide deck the Symantec folks had to work with (delivered from on high, and under instructions that it must not be changed) was fairly ordinary. Rodney commented that one particularly confusing slide would have been better explained as a diagram. That’s not the fault of the guys in the room, but of whoever came up with the slides back in the US.

There was cloud.

The worst part was that there’s a good story underneath it all. It was only after some detailed and fairly pointed questioning that ran into stoppage time that I was able to get a real sense of what the product actually is. It has the potential to be really quite good, and I’ll explain why.

But first, a short history lesson.

Veritas Volume Manager

Back in the day, circa 1997, if you were in any way serious about your application staying up, you would run it on a Unix server. More than one, usually. And there was a good chance your Unix server would run Solaris, or HP-UX. Both had serviceable, but fairly basic, volume managers that would do software RAID.

If your application was important, then you would routinely replace the inbuilt volume manager with Veritas Volume Manager, or VxVM. Solaris’s DiskSuite was fine for very simple RAID, like root disk mirrors, but for real RAID you either bought a hardware device like an A1000 disk controller, or you used VxVM. VxVM gave you power.

You could configure disks into groups, to protect against failure using RAID. You could stripe them in various ways. You could re-stripe them. Online. You could replace failed disks online. You could move data around dynamically.

You could combine VxVM with the Veritas File System, VxFS, and then you could dynamically resize filesystems, up and down, online. You could add in the Quick I/O feature to speed up database performance.
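For readers who never met it, the workflow looked something like this. This is a hedged sketch from memory: the command names are real VxVM/VxFS tools, but exact flags vary by version and platform, and the device names and sizes are placeholders. It’s written as a dry run (the `run` helper just prints each command) so it’s safe to read or execute anywhere:

```shell
#!/bin/sh
# Dry-run sketch of a classic VxVM/VxFS workflow. The run helper only
# prints each command; on a real VxVM host you'd execute them directly.
run() { echo "+ $*"; }

# Initialise a disk group from two disks (device names are placeholders)
run vxdg init datadg disk01=c1t1d0 disk02=c1t2d0

# Carve a striped volume out of the group
run vxassist -g datadg make datavol 10g layout=stripe

# Put a VxFS filesystem on it and mount it
run mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
run mount -F vxfs /dev/vx/dsk/datadg/datavol /data

# Grow the volume and the filesystem together, while mounted
run vxresize -g datadg datavol +5g
```

That last line is the point: growing a mounted filesystem online, in one command, was a big deal in the late 1990s.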

You could add Veritas Volume Replicator to copy data to a remote server for disaster recovery. Sync or Async.

You could add Dynamic Multipathing, to use more than one I/O path to individual drives, and remove all your single points of failure.

You could add Veritas Cluster Server to automatically detect issues with your server, or application, and automatically fail your application over to a different server. Or just parts of it. And it integrated with VxVM, VxFS, VVR, and all the rest.

Halfway into the 2000s, Symantec bought Veritas, but the acronyms remain.

Why am I telling you all this?

Because I want to impress upon you that Veritas software has been regarded as the serious choice for Unix admins when implementing robust servers for many, many years.

And it’s what they built the N8300 from!

From Physical to Virtual

After some pointed questioning about this product, a phrase finally came out that I think well describes what this product is about. Symantec are taking what they’ve always done in the physical world, and making it available to the virtual world.

What a great story!

The Vx line of software has an excellent pedigree as enterprise grade; it’s been doing things other storage arrays can’t (online volume moves between aggregates; I’m looking at you, NetApp) for over 10 years! Build an array with this software, with its myriad little improvements to a solid, solid core of code, and you have a stable, robust, flexible platform for serving data.

And yet…

And yet there are problems.

VxFS doesn’t have dedupe or compression yet. They’re on the roadmap.

Space optimised snapshots are also on the roadmap. Snapshots for both VxVM and VxFS have been around for a while, but they tend to chew up a fair bit of disk.

Of these, dedupe is probably the big missing piece, so Symantec will need to get moving on getting it into the mix in order to compete well with other storage arrays. Ditto compression and space efficient everything.

The target market for this array isn’t clear. Symantec see a lot of growth in the Linux market, but I’m not sure why Linux users would be drawn to this array over any of the others. Do many of them appreciate the pedigree of Veritas software? Does it even matter any more?

If we’re taking things to the virtual world, then Symantec need a compelling story for VMware folks. Rodney didn’t know anything about VxFS or VxVM. And this is a smart guy who’s been in tech for a while now. His background is in software and, I believe, Windows, not Unix, so it’s not surprising that he’s never come across VxVM before. He’s now a Principal Architect for Data Center and Cloud and has been a vExpert for three years running. How are Symantec going to sell to people like him?

And the marketing has so far been desperately confused. Branding everything as “cloud” doesn’t provide clear benefits to me as a customer. Why would I choose this array over anyone else’s? It doesn’t have all the features the others do.

Is it faster? Easier to use? VxVM has a powerful commandline, but easy to use is not the phrase I’d first choose. And unless the GUIs for Symantec products have gotten dramatically better in the last year, they’re not going to win over people that way either.


This is a new product from a vendor not known for selling storage arrays. As an unknown quantity, Symantec will need to provide some sort of compelling point of difference to attract interest. Otherwise they’ll never cross the chasm between the early adopters who’ll kick the tyres on anything new and shiny and the more mainstream corporates that are Symantec’s bread and butter.

And they’ve got to work on their marketing, or their poor sales guys are doomed.

I’ll be watching this one develop with considerable interest.

Merch disclosure:

  • None. Just water.


  1. Great write up, but seriously, just water?
    Not even a pad or a pen?
    I can see why they need to work on their marketing. #;-)

  2. We don’t really need more branded trinkets. Better to save the money and invest in better slide decks.
