SNIA Blogfest 2010: EMC

This is part of a series on the SNIA Blogfest 2010.

EMC were the first vendor of the day. We assembled at their offices in Miller St, North Sydney and were greeted by Clive Gold, Marketing CTO for Australia and New Zealand.

We filed into what looked like a custom-designed briefing room. An arced desk faced a projection screen, with a podium of A/V gear at the far end, backed by a window into a shiny data centre full of EMC gear.

Clive fired up his slide deck, and we were away.

Raw vs. Usable

Straight out of the gate, Clive scored bingo points for ‘storage efficiency’ and got the EMC 20% guarantee out there, and then dissed ‘another vendor’ (more bingo points) for their guarantee. Yawn.

I asked about $ per GB and TCO, suggesting that for my customers, this was a more important measure. I don’t care if I have 9 zillion exabytes of raw if the cost of providing 40 TB of usable is lower than 40 TB raw of a different vendor’s disk. Clive then said something very interesting indeed:

He spoke about how all the base components, like the physical disks themselves, are commodity parts. All the storage vendors buy hard disks from the same few companies: Seagate, Western Digital, Maxtor, etc. They have standard interfaces, standard sizes, and standard performance characteristics. There’s not a lot of room for differentiation on raw disk.

This means the differentiation has to be in how the different vendors use the raw storage to provide usable disk. He was talking about how different vendors would provide storage efficiency in how they convert raw to usable, which is software, and that EMC’s was better (of course). Clive stopped here, but the implication is clear: the storage vendors don’t make money on hardware. All the margins are in software. Software is where the game is won or lost.

Clive also made the call that for any other vendor’s niche play, EMC would have an equivalent feature. I have a note here regarding “no ifs or buts”, but I forget the exact context. I recall it related to the ability to make use of features, and how there are often caveats about the specific ways in which they will work. Things like “you can have X, or Y, but not both at the same time”. What I remember is that immediately after saying there were no buts, Clive talked about an EMC feature that had a ‘but’ attached. Oops.

Clive had sidestepped the question about TCO, but we moved on to other things.

Dedupe

More bingo points.

Clive tried to frame the debate by talking about something called ‘Real Dedupe’, and I’m still utterly in the dark about what he meant. He then spent an inordinate amount of time talking about how NetApp dedupe sucks, and how EMC’s is better. He went into technical details about how WAFL works and how that affects dedupe. The big problem I have with this is that he was completely wrong. Someone’s fed him some bad information about how NetApp WAFL and dedupe work.

One bombshell for me: Clive admitted that NetApp get better storage results with dedupe of Virtual Machines than EMC. He also said that EMC do better overall, because they make it up with better dedupe of other kinds of data. Given the explosive growth in VMs of late, this seemed like quite a significant concession to me, and I don’t quite understand why Clive would point this out.

NetApp

An aside here: Clive spent a lot of time talking about NetApp, or as he put it, ‘NetApps’. It was… a little weird. “Methinks thou dost protest too much” sprang to mind. For a start, mispronouncing the name of a major competitor on every occasion was odd. I usually only see that with customers who don’t like NetApp, or don’t know much about them. I couldn’t work out if it was a genuine oversight, or a subtle jab. Either way, we were there to talk about EMC, not NetApp, so I don’t know why Clive wanted to spend so much valuable time talking about a competitor.

The other odd thing was that Clive seemed to have a fundamental misunderstanding of the way WAFL and NetApp snapshots work. There are plenty of genuine issues with NetApp products, so why pick on ones that don’t exist? I can only guess that Clive’s been fed bad information from somewhere, and it seemed juicy enough to warrant attacking a competitor.

I guess this just highlights the dangers of bashing a competitor’s product: you’d better make sure your understanding of the issues is spot on, or you risk looking a bit foolish in front of people who know better.

XIV

Again, Clive engaged in a bit of FUD dispersal regarding IBM’s XIV product. He said that IBM “seemed to have dropped the product”, apparently in reference to the new StorWize V7000 product and a lack of messaging about XIV around the place lately. Well, IBM spent plenty of time talking XIV with us, but I’ll leave that for my writeup of the IBM part of the day.

He also claimed that if you pulled two drives “from the same set”, you’d lose data. This seems to be the hoary old chestnut of double-disk failure that gets trotted out whenever people want to diss the XIV. It’s not true, and again, there are better things to pick on if you want to have a go at the XIV.

FAST

Clive moved on to talk about Fully Automated Storage Tiering, or FAST. Somehow I’d missed what the acronym stood for in all the blurbs on this feature. I don’t know much about the technical details, so I’m running off Clive’s spiel. I can only hope he knows his own company’s products better than those of his competitors.

FAST sounds quite cool, really. It will move chunks of data, even sub-LUN, to avoid hotspots. It works by tracking statistics on chunks of data (‘extents’, in EMC parlance) and moving chunks that need more performance onto faster disk. It also moves chunks that need less performance to slower disk. It’s automated, as the name says.

This is quite a cool feature, as having to migrate customer data from one set of disk to another is a genuine, and frustrating, problem I continue to encounter in operational environments. Business units never seem to know what their requirements are (aside from ‘big’ or ‘lots’, and that they want storage to be free), so they invariably end up on the wrong kind of disk.

FAST does a better job with more CPU and memory to track the extent statistics, so, according to Clive, mid-tier systems in the EMC ecosystem (i.e.: not the VMAX) get the best results with about a 1 gigabyte extent size. That’s the sweet spot in the tradeoff for size vs. flexibility.
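
To make the mechanics a bit more concrete, here’s a toy sketch of the kind of extent-based tiering Clive described. It’s purely my own illustration: the tier names, IOPS thresholds and promotion logic are made up, and it has nothing to do with how EMC actually implement FAST.

    # Toy sketch of extent-based automated tiering. Track recent I/O per
    # extent, promote hot extents to a faster tier and demote cold ones.
    # Clive's mid-tier sweet spot was an extent size of about 1 GB.

    TIERS = ["ssd", "fc_15k", "sata"]   # fastest to slowest (names assumed)
    PROMOTE_IOPS = 200                  # invented thresholds, not EMC's
    DEMOTE_IOPS = 20

    def retier(extents):
        """extents: list of dicts with 'id', 'tier' and 'recent_iops'."""
        moves = []
        for ext in extents:
            current = TIERS.index(ext["tier"])
            if ext["recent_iops"] >= PROMOTE_IOPS and current > 0:
                moves.append((ext["id"], ext["tier"], TIERS[current - 1]))
            elif ext["recent_iops"] <= DEMOTE_IOPS and current < len(TIERS) - 1:
                moves.append((ext["id"], ext["tier"], TIERS[current + 1]))
        return moves

    sample = [
        {"id": 1, "tier": "fc_15k", "recent_iops": 450},  # hot: promote to ssd
        {"id": 2, "tier": "fc_15k", "recent_iops": 5},    # cold: demote to sata
        {"id": 3, "tier": "sata",   "recent_iops": 60},   # middling: stays put
    ]
    for ext_id, src, dst in retier(sample):
        print("extent", ext_id, src, "->", dst)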

Also, EMC’s experience is that VMware tends to require more write-cache than other workloads, which have traditionally averaged out to being more read-intensive. This is particularly true for VDI workloads, apparently.

Federation

This part of Clive’s presentation was the meatiest, in my opinion. Clive talked about how customers often had difficulty getting the advantages of new products or features in a cost-effective way, usually because of the constraints of legacy gear. It occurs to me that an uncharitable person could raise their eyebrows at this statement and cast aspersions on EMC’s product line, but I don’t know anyone like that.

It’s a good point, and Clive asserted that federation and scale-out is the way to solve the problem. Essentially you have multiple frames that are used to provide storage service, and you can do rolling upgrades of your fleet, migrating data around to keep the services online while you do it. Clive didn’t actually articulate the message this way, but I now realise what he was trying to say, and it’s a really good idea.

Clive mentioned storage Quality of Service (QoS), at which point I piped up and explained that I work in the interface between business and customers, and that the ability to deliver this virtualised storage service was often made challenging by the lack of QoS features in storage vendors’ products. I was quite keen to find out more about this feature. I specifically asked if it worked as a penalty-style QoS (if capacity is maxed out, which workload loses first?), or if it could operate on a “you’ve only paid for 300 IOPS, so that’s all you’ll get, even if the box has 4000 left” basis.
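
To spell out the difference between the two models I had in mind, here’s a toy sketch. The workload names, priorities and numbers are all mine, and none of this reflects an actual EMC (or anyone else’s) QoS feature.

    # Two toy QoS models: a hard cap vs. a penalty-style scheme that only
    # throttles once the array runs out of headroom. All numbers invented.

    def hard_cap(requested_iops, purchased_iops):
        # "You've only paid for 300 IOPS, so that's all you'll get",
        # even if the array has thousands of IOPS sitting idle.
        return min(requested_iops, purchased_iops)

    def penalty_model(requests, priorities, array_capacity_iops):
        # Everyone gets what they ask for until the array is saturated;
        # only then do the lowest-priority workloads lose first.
        granted = {}
        remaining = array_capacity_iops
        for name in sorted(requests, key=lambda n: priorities[n]):
            granted[name] = min(requests[name], remaining)
            remaining -= granted[name]
        return granted

    print(hard_cap(900, purchased_iops=300))  # -> 300, despite the headroom

    print(penalty_model({"vdi": 3000, "exchange": 2000},
                        priorities={"exchange": 1, "vdi": 2},  # lower = more important
                        array_capacity_iops=4000))
    # -> exchange keeps its 2000 IOPS; vdi loses first and gets the remaining 2000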

I didn’t really get a straight answer, per se. We talked about how customers tended to buy more capacity than they really needed, and usually upgrade to get more performance. We talked about how customers tended to regard storage they’d paid for as ‘theirs’ and didn’t want to share. What Clive did say is that the partitioning of workloads should happen at the operating system level, not in the storage arrays. To deliver particular service levels or QoS to customers, the control point would be in the operating system providing the services.

I’m not entirely satisfied with this approach, but it made more sense after we got onto the next section.

Tribes vs. Re-aggregation

Clive talked about how things were moving away from tribes (or silos) to a re-aggregation and centralisation of control over things. You’d have to have been a rock living underneath another larger rock to not notice this trend. He mentioned a cloud (bingo!) operating system, and that VMware was that thing.

Clive talked about how EMC had so far not had the full level of integration with VMware that they could have. He said that this was because they were trying to stay at arm’s length from VMware since the acquisition, so as not to disrupt VMware’s independence and its ability to work with other companies. He also said that EMC had realised that working at arm’s length didn’t mean not integrating as well as other vendors did, so they’d started doing more work in that area. Support for things like VAAI (he specifically mentioned that the ability to ‘stun’ a VM was available) was getting a lot more attention. Clive did refer to VMware as EMC’s ‘little brother’, which I think is a trifle dismissive given some of the analysis I’ve seen of the relative market capitalisation of the storage and VMware parts of the business.

Clive talked about having templated designs for VMware that would be applied simply, so when designing a new system, you’d just plug in some numbers like “an Exchange deployment with 7000 users, each getting 1 gig of storage”, and the template would spit out a standard design that adheres to best practice. Clive talked about there being application software to add smarts or discovery to systems in order to sniff out poor practices so you could correct them.
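
As a toy illustration of what that kind of templated sizing might look like, here’s a sketch. The overhead factors are entirely made up for the example, not EMC best practice.

    # Toy version of a templated sizing calculation, along the lines of
    # "an Exchange deployment with 7000 users, each getting 1 GB of storage".
    # The overhead factors below are invented for illustration only.

    def exchange_template(users, mailbox_gb,
                          log_overhead=0.10,       # transaction logs
                          snapshot_overhead=0.20,  # local snapshots/restores
                          growth_headroom=0.30):   # room to grow
        mailbox_data_gb = users * mailbox_gb
        usable_gb = mailbox_data_gb * (1 + log_overhead
                                       + snapshot_overhead
                                       + growth_headroom)
        return {"mailbox_data_gb": mailbox_data_gb,
                "usable_required_gb": round(usable_gb)}

    print(exchange_template(users=7000, mailbox_gb=1))
    # -> {'mailbox_data_gb': 7000, 'usable_required_gb': 11200}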

I asked about the problem of identifying capacity issues in advance so that you could order more gear before you ran out. Clive said that EMC has applications that can help with this, but didn’t elaborate.

Clive mentioned Vplex, and how this would help to move people’s data away from the ‘physicality’ of their storage. People would stop pointing to specific frames as being where their data lives, and they would be able to migrate from older engines to new engines seamlessly as technology improved. Clive also highlighted that data didn’t have to be all in one place, and that data could exist in multiple physical locations, but that consumers wouldn’t need to know or care where their data was physically housed. Or at least, not as much as they do now.

The clear message was that the control of all this stuff would be placed firmly in the VMware administrators’ hands. On thinking about it after the day, it became clearer to me that Clive was talking about virtualised infrastructure that could be upgraded without outages, and that customers would benefit from new features without having to plan lengthy data migration exercises. Vblocks are, in Clive’s words: “some infrastructure”.

So things will move from business units buying a storage array or three, the switch gear, servers, VMware and their applications, to using pre-defined templates for a VMware-based deployment to define how much infrastructure you need. And then you go and buy that, like bricks for a house. “I’d like to order 6 tonnes of EMC infrastructure, please.”

Cool.

Unified vs. Specialised

Clive finished up by talking about having unified solutions versus specialised solutions. I was a bit confused, again, by this section. Clive seemed to be saying that EMC would maintain a diverse product range to cater for all the different specialised situations (like Greenplum, which he plugged).

It wasn’t clear to me where the ‘unified’ bit came in. Clive did say that FCoE is “the data-centre fabric of the future”, which I think is a big call. I think he really meant converged networking, so Ethernet for the cabling, but a variety of protocols over the one wire and Converged Network Adapters in the endpoint devices. Fair enough.

Clive also clarified that EMC would have the same level of support for Hyper-V as they do for VMware, but they wouldn’t for Xen. It’s a pure market-share thing, as the return on investment of supporting Xen just isn’t there.

Clive was out of time as he had another commitment to attend, and unfortunately Mark Oakey (Marketing Manager – Storage, A/NZ) couldn’t make it as he was tied up in another meeting, so that’s where the official part of the session ended. We bloggers hung around for a few minutes to talk, since we had some spare time before the bus to the next session.

General Impressions

Overall, Clive handled the Blogfest a bit like any other Marketing briefing. He had his slides, and he wanted to stay on-message and get through his spiel. The bloggers gave him a good grilling on technical issues, but we didn’t really talk about business or customer type issues. It was very much a technical briefing, and if we started to drift, Clive was soon back onto his prepared track.

Halfway through I felt a bit frustrated by the lack of attention being paid to business or customer side issues, and wasn’t sure if it was just my inexperience or a fundamental misreading of the day. The bloggers talked about it after Clive had left, and Rodney suggested that because we were a predominantly technical style audience, that’s why Clive had his technical hat on. If we were CEOs or CIOs, he’d have given a very different spiel. I thought that was a fair enough point, so I resolved not to worry about the technical bent of things, and to expect that sort of thing from the rest of the sessions.

That said, I thought the overall message was a trifle confused. There was some good, solid content in there, and some good stories to tell, yet somehow it got lost in the noise. It felt a little bit like EMC’s offerings in the field: a bunch of different products, many quite excellent, but without a really strong unifying story to hold them all together, which is a shame. Rather than a checklist of “hey, here’s a bunch of cool stuff we sell”, I would have preferred a stronger narrative that highlighted why EMC’s products are compelling and why we should all buy them.

Merch: a slim leather folio embossed with EMC’s logo, and an EMC-branded ballpoint pen. This was handy, as somehow my pen had gone missing, so I used the new one throughout the day for my notes.

Next in the series: IBM

Comments

  1. Justin, fantastic synopsis of the EMC visit. I couldn’t have written it better myself. Most enjoyable day I have had for a long time.

    For any other bloggers in Sydney reading this, I highly recommend joining SNIA and attending the next Blogfest; you won’t regret it.

  2. For a unified message, I’d recommend this…

    http://virtualgeek.typepad.com/virtual_geek/2010/11/now-this-is-solid-marketing.html

    I’ve done whiteboards around this (yes, they’re high-level) and it really pulls things together quite well.
