SNIA Blogfest 2010: IBM

This is part of a series on the SNIA Blogfest 2010.

IBM were the second vendor of the day. We travelled to their Pyrmont Street office, which is a bit of a fishbowl for the poor folks who work there. There’s a transparent glass partition separating the front entrance from the open plan office area, but that’s it. Zero privacy. Not my cup of tea, but hey, I don’t work there.

It’s an old heritage-listed building, so there’s no permanent wiring, apparently; everything is wireless, including the phones. It’s all exposed wooden beams and modern-looking office gear, including auto-opening glass doors to get onto the floor proper.

We were met by Jane Bounds and ushered into a large conference room, where the difference in vibe compared to EMC was immediately apparent. We were greeted pleasantly, nay, enthusiastically by what seemed like a multitude of IBMers: Anna Wells, Craig McKenna, and Joe Cho were the main players, and there were a couple of support people as well. I would go so far as to say that we were welcomed. In the bus after leaving IBM, we all commented on the marked difference in attitude.

One big downside for me was the lack of 3G coverage by 3/Vodafone in the building. It’s in a dip, and the room was buried in the middle of the building, so it was always going to be a challenge. 3’s coverage in Melbourne has been pretty much fine for me, and much better than Optus, but I found 3’s coverage in Sydney to be generally pretty patchy. With no WiFi or fixed-line Internet either, I was cut off from the tweet stream for the duration, which was a shame.

The Big Picture

[Image: IBM’s Strategy for Storage handout]

Anna Wells kicked off by giving us IBM’s overall picture of storage. Instead of bringing up the dreaded Slide Deck of Doom, she went straight to the whiteboard and started drawing. It turned out that what she sketched was basically the picture you see here. She’d obviously done this several times before. This worked really well, as it was far more interesting than having someone talk at a slide.

Anna talked about 40% annual growth in storage capacity requirements, and how this had been going on for years. She alluded to IBM’s ‘Smarter Planet’ marketing by saying that smarter devices, roads, etc. would all result in even more data being created, in lots of different formats, and most of it unstructured. More CapEx and OpEx are required to buy more gear, and to provide the space, power, and cooling to run it. Businesses want to keep risks under control but improve their service levels, such as through longer opening hours. Companies are more and more operating as 24×7 shops, so you no longer have the ‘luxury’ of weekends and public holidays to take outages. Always-on business puts even more pressure on admins.
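
As an aside, 40% compound growth is brutal: it means capacity roughly doubles every two years, since 1.4² ≈ 1.96. A quick back-of-the-envelope sketch in Python (my arithmetic; the only number from the session is the 40%, and the 100 TB starting point is made up):

```python
# Back-of-the-envelope: what 40% compound annual growth does to capacity.
# The 40% figure is from Anna's talk; the 100 TB start is hypothetical.
capacity_tb = 100.0
for year in range(1, 6):
    capacity_tb *= 1.4
    print(f"year {year}: {capacity_tb:,.0f} TB")
# Prints 140, 196, 274, 384, 538: more than 5x in five years,
# doubling roughly every two years (1.4 ** 2 = 1.96).
```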

Anna said that existing systems are often large and complex, and therefore resistant to change, which makes it more difficult to get the advantages that change can bring. Similarly, people used to existing, complex systems built around information silos tend to be resistant to change.

Anna mentioned that CEOs and CIOs often had a good vision of what needed to be done; the trick is in turning this into execution. I asked if, in IBM’s discussions with customers, they were seeing good alignment between the executives, the storage admins, and middle management. Anna said that there was mostly good alignment in vision; both the executives and the storage admins know what needs to be done and agree that it’s necessary. However, there was a lack of alignment in execution: turning plans into action.

Anna said that the chief cause of all this was that the admins didn’t have the time to work on strategy when they’re already busy with the day-to-day. This fits my personal experience. All too often a grand strategy fails because the people required to implement it are too busy running in circles keeping the lights on. Anna said that we need to free up people’s time so they can work on strategic things instead of getting lost in the daily grind. She said that it is “incumbent on us”, as an industry, to make things easier to implement and use.

Anna said that IBM’s large customers often wanted more fine-grained control over implementations, and to do more of the work themselves. It’s the medium-sized organisations that want more solutions to be a black box that’s just delivered. This surprised me at the time, but on reflection I can see why it makes sense. Small and medium-sized organisations don’t have the staffing levels to spend time on implementing the technology. They have business problems they need solved, and they want IT to just do that so they can get on with doing what their business is about: selling their products and services to customers. It’s only the bigger companies that have a dozen (or more!) people in-house that they can put onto an ERP delivery project for 6 months.

What’s missing from the graphic is the part captured in my scribbled notes: IBM Solutions, Products and Services, which underpin everything else. Anna drew that up on the whiteboard, but it’s not on the slide. I think this helps to show why a whiteboard and a pen can sometimes be more powerful presentation tools than PowerPoint or Keynote.

Anna then handed over to Craig McKenna (Technical Sales Leader, IBM Systems Storage Growth Markets) to get stuck into the more technical aspects, and to specifically talk about XIV and StorWize V7000.

XIV

Craig told us that the key driver for customer adoption of XIV is operational ease of use. This is down to the GUI tool used to admin a fleet of XIV devices, which (I know from past briefings from IBM) is slick and shiny and makes it fast and simple for minimally trained people to turn on some LUNs. There’s bugger all tuning available, and there are no tiers or different RAID levels to set.

XIVs are based on a (‘very unique’, natch) grid architecture of multiple Linux-based nodes that live inside a standard 19″ rack. Six of them are designated as I/O modules, which handle the external connections (up to 24 FC and/or 6 iSCSI ports, no NFS or CIFS), and the rest just house disk. All the disk is SATA (1TB or 2TB drives), and each module has its own cache. There are 2 GigE interconnect switches (for redundancy) and 3 UPS modules.

Data is split up into 1 gigabyte chunks that are mirrored somewhere else in the unit. There’s a funky algorithm (they call it RAID-X) that spreads them around pseudo-randomly, though it excludes things like the same node, so if you lose a node, you don’t lose both copies of a block. This directly contradicts something brought up in EMC’s session, where Clive said that if you pulled out a single module (or maybe it was a shelf of disk, it wasn’t that clear) you’d lose data. You’d have to lose both modules or disks that contain your data blocks within the RAID-X rebuild time, which is 30 minutes max. on a fully utilised 1TB drive system; if your disks are only half full, the rebuild only has to find 500 GB of blocks, so it’ll take about half the time.
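
To make the placement rule concrete, here’s a toy sketch of the idea (entirely my own illustration, not anything IBM showed us, and assuming a full frame of 15 modules): each chunk gets a primary and a mirror copy placed pseudo-randomly, with the one hard constraint that the mirror never lands on the same module as the primary.

```python
# Toy illustration of XIV-style pseudo-random mirroring (not IBM's code).
import random

MODULES = list(range(15))  # assuming a full frame of 15 modules

def place_chunk(chunk_id: int) -> tuple[int, int]:
    rng = random.Random(chunk_id)  # deterministic placement per chunk
    primary = rng.choice(MODULES)
    # Mirror goes anywhere except the primary's module, so a single
    # module failure can never take out both copies of a chunk.
    mirror = rng.choice([m for m in MODULES if m != primary])
    return primary, mirror

placements = [place_chunk(i) for i in range(10_000)]
failed = 3  # simulate pulling one module
# Every chunk still has a surviving copy somewhere else in the grid.
assert all(p != failed or m != failed for p, m in placements)
```

The same spread is also why rebuilds are quick: the surviving copies of a failed module’s chunks are scattered across every other module, so the whole grid rebuilds in parallel rather than hammering one hot spare.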

Snapshots are redirect-on-write, not copy-on-write, so they’re similar to NetApp snaps, but not quite identical. You get a bucketload more, though (thousands, not 255 max). Also better than NetApp snapshots: you can restore to any of them and not lose all the others. Replication to another XIV is FC or iSCSI. The interconnect is moving to InfiniBand instead of GigE, which will speed things up a bit.
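
If the distinction is unfamiliar: with redirect-on-write, a snapshot is just a frozen copy of the block map; new writes land in fresh blocks and repoint the live map, so nothing has to be copied aside at write time. A minimal sketch of the general technique (my illustration, not XIV internals):

```python
# Minimal redirect-on-write sketch (a generic illustration, not XIV code).

class RedirectOnWriteVolume:
    def __init__(self):
        self.blocks = {}     # physical store: address -> data
        self.live_map = {}   # logical block -> physical address
        self.snapshots = []  # each snapshot is just a frozen map
        self.next_addr = 0

    def write(self, lba, data):
        # Always write to a fresh physical block and repoint the map.
        # Old blocks stay put, so snapshot maps remain valid for free.
        self.blocks[self.next_addr] = data
        self.live_map[lba] = self.next_addr
        self.next_addr += 1

    def snapshot(self):
        self.snapshots.append(dict(self.live_map))  # copy the map, not the data
        return len(self.snapshots) - 1

    def restore(self, snap_id):
        # Swapping the live map back touches no other snapshot, which is
        # the 'restore to any snapshot without losing the rest' property.
        self.live_map = dict(self.snapshots[snap_id])

vol = RedirectOnWriteVolume()
vol.write(0, b"v1")
snap = vol.snapshot()
vol.write(0, b"v2")
vol.restore(snap)
assert vol.blocks[vol.live_map[0]] == b"v1"
```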

According to the other bloggers, XIV sounds a lot like a 3PAR. It also sounds a lot like the Google File System, based on what I remember from a presentation I went to a couple of years back, though Google called the chunks ‘shards’ and kept more than 2 copies if the data was deemed important, which you can’t do with an XIV.

The positioning of XIV, according to Craig, is at the high-end midrange in $/GB usable, but with DS8800 performance and features. It fills a gap in their product set between the DS8xxx at the high end and the mid-range DS5xxx that they OEM from LSI. More on the LSI relationship in a moment.

A big win for me, compared to some of the other vendors, is that the XIV comes with all the features. There’s no extra licensing if you want thin-provisioning, or dedupe, or replication. If you were doing a $/GB comparison, you’d need to include that, as it can make a big difference to overall pricing. On the flipside, maybe you’re paying for features you don’t need?

Given that we spent a good 40+ minutes talking about XIV, it doesn’t look much like a dropped product to me.

XIV downsides

The big concerns for me are:

  • The lack of expansion outwards. When your XIV is full, you have to buy another XIV; you can’t just tack on another rack of modules and extend the grid, which I find a bit odd. IBM are moving to multiple-frame interconnect, apparently, but there are no dates.
  • You have to pick all one disk size. If you put 2TB disks into what was originally a 1TB system, they’ll look like 1TB disks until you replace all the disks. Then they all magically become 2TB disks, which is something, I guess.
  • SAS-attached SATA disks (or nearline-SAS, as Craig called them) aren’t available yet. It’s on the roadmap, but many other vendors are already well down the SAS-attached path and away from pure SATA interconnect.
  • It’s totally different to everything else in the IBM fleet, so if you have DS, you need to manage it differently. There’s integration with TPC, apparently, but I’m not sure how good it is. This is changing now that the XIV GUI is being used as the new frontend for the SVC and V7000 products, so it could be that the main reason IBM bought XIV was for the user interface.
  • All LUNs, all the time. No NFS or CIFS support, and, it appears, no plans for either.

StorWize V7000

The shiny new V7000. The StorWize name is from the compression appliance company IBM bought, but Marketing liked the name so much they thought they’d utterly confuse us all by applying it to a totally different product than the one we used to associate with the name. WIN!

It’s basically an SVC with some disk in it, and the XIV GUI to drive it. It fits in half a standard 19″ rack, and can contain up to 60 or 120 disks (1TB or 2TB respectively). Apparently in March 2011 that’ll go up to 120/240. You get all the features as standard, with the exception of external virtualisation (attaching it to other disk arrays, like an SVC does) and remote replication, aka mirroring. So not quite like the XIV. Replication between a V7000 and an SVC is coming.

Back to the LSI relationship: Craig was pretty blunt when he said that IBM expect some cannibalisation of the DS5000 market by the V7000. That doesn’t seem to bother IBM much, which, given the quality problems experienced recently with the gear OEMed from LSI, could signal the beginning of the end for the relationship.

With some final comments about IBM’s compression appliances (not StorWize any more, remember!), Craig finally yielded the floor to Joe to talk about all things backup.

TSM

I’ll admit: I’ve never used TSM, and since we were now well over the 1 hour 15 minutes officially allotted, I started to glaze over here.

Joe started by talking about a product called TSM Fastback, which is apparently a completely different product to TSM. I guess Marketing couldn’t let go of a name they liked again.

Anyhow, Fastback is supposed to be easier to use than TSM. What’s so funny? It’s disk only, no tape, and volume-based, not file-based. It’s supposed to be simple to operate and fast to restore (Fastback, get it?) and supports Linux and Windows.

It also has a funky feature where you can ‘mount’ a backup as a volume on the client and use it like a regular filesystem, including writing to it. Writes aren’t committed to the backup; they live in some sort of throwaway cache. This means the DBAs could mount up a database backup and test it, or extract specific tablespaces or rows, or do whatever funky restore things they might want to do.
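
Under the hood this is presumably some kind of discardable write overlay sitting on top of the read-only backup image. A minimal sketch of that general idea (my guess at the technique; I have no insight into how Fastback actually does it):

```python
# Toy discardable write overlay over a read-only backup image
# (my illustration of the general idea, not Fastback internals).

class WritableBackupMount:
    def __init__(self, backup_blocks):
        self._backup = backup_blocks  # the immutable backup image
        self._overlay = {}            # throwaway cache of dirty blocks

    def read(self, lba):
        # Dirty blocks win; everything else comes from the backup.
        return self._overlay.get(lba, self._backup[lba])

    def write(self, lba, data):
        self._overlay[lba] = data     # the backup itself is never touched

    def unmount(self):
        self._overlay.clear()         # all the test writes evaporate

backup = {0: b"row-1", 1: b"row-2"}
mount = WritableBackupMount(backup)
mount.write(0, b"row-1-changed-by-dba")
assert mount.read(0) == b"row-1-changed-by-dba"
mount.unmount()
assert backup[0] == b"row-1"          # original backup is unchanged
```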

I’d perked up a bit here, because I’ve done an enterprise backup system design or two, and that feature is really cool. It’s not available in TSM Classic, alas.

Speaking of TSM Classic, 6.2 introduces client-side dedupe. The jump from version 5 to 6 means a big jump in the server specs, so check them carefully before you upgrade. Apparently that’s to help handle the dedupe shenanigans required server-side.
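
The usual mechanism behind client-side dedupe is simple to sketch: hash each chunk of data and only ship the chunks whose hashes the server’s index hasn’t seen. That also hints at why the server needs more grunt, since it now has to keep and constantly query that chunk index. A generic illustration of the technique, not TSM’s actual protocol:

```python
# Generic client-side dedupe sketch (not TSM's actual protocol).
import hashlib

def backup_client(chunks, server_index):
    """Ship only the chunks whose hashes the server hasn't seen."""
    to_send = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in server_index:  # index lookup happens server-side
            server_index.add(digest)
            to_send.append(chunk)
    return to_send

index = set()
first = backup_client([b"A", b"B", b"A"], index)  # ships A and B once
second = backup_client([b"A", b"C"], index)       # ships only C
assert first == [b"A", b"B"] and second == [b"C"]
```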

TSM for Virtual Environments is the integration with VMware’s vSphere API, but it uses some sort of proxy server, which sounds a bit too much like VCB for my liking. It’ll do image backup and file level restore.

One More Thing

Right at the end (we were well over time, and hungry bloggers are not to be trifled with), IBM wrapped up by answering one of the questions we’d sent through as prep earlier. It was mine, which is why I’m putting this here.

The question was “What, in your view, is the most important thing that is being overlooked by the storage industry today, and why?”

IBM said that it was “How are you using the data within your organisation?” The data is there to be used, not just stored somewhere, so organisations need to ensure that they’re extracting value from all this information they’re keeping. IBM want to help them do that.

I thought that was quite an interesting answer.

Be Prepared

What impressed me most about IBM was that they were prepared. According to Paul Talbut, IBM were the only vendor to ask a lot of questions about the event. They wanted to know who was coming, our backgrounds, and the kinds of information we wanted to hear about. And they tailored their message to suit.

During the event, they were enthusiastic about what they had to say, and it really showed. Anna was very polished, and had obviously given that whiteboard presentation numerous times before. Similarly, Craig is an excellent presenter, but I was even more impressed with his breadth, and depth, of knowledge. He was very confident in talking to us about basically anything at all, and I didn’t feel that he was pushing us away from whatever we were asking questions about in order to get back to his talking points. Joe is still learning, but got better after he’d warmed up a bit. Plus he was unlucky to be last after Craig had gone well over time, but then Craig was interesting, and we didn’t stop him.

Even better, IBM had a clear understanding of what they were all there to do, and a strong central narrative to tie it together. Anna started with the big picture, Craig and Joe fleshed out the details of where their products fitted and the problems they could solve, and then it was all tied up neatly at the end.

I’ve had dealings with IBM in the past, including a stint sub-contracting for them, so I was skeptical going in. However, the team did such a good job for Blogfest that I can now see why customers would be swayed into going all-IBM. IBM have a good range of products that complement one another and integrate well with each other, so you really do feel that IBM could be your one-stop shop for all things IT. I haven’t been completely Blue-washed (as Craig put it), and we’ve already covered the gap between vision and execution, but I’ve surprised myself by feeling much friendlier towards Big Blue than I was before.

Which I guess is what IBM were hoping to achieve. Good job, folks.

We broke for lunch (catered sandwiches and cans of soft drink, nothing fancy), and had little informal chats about various things. Lunch was a bit of a rush, but it was tasty. There’s a photo on several dozen cameras of the whole group posed against the back wall. And yes, we all joked about keeping ourselves in jobs chewing up so much storage/the need for dedupe/etc., firmly identifying ourselves as hopeless nerds, all.

Merch: A floppy sewn neoprene laptop baglet left over from some Alcatel-Lucent partner thing, and a plastic notepad thing with a cheap pen that is now serving as my car mileage logger, which should keep my accountant and the ATO happy.
