BlogFest2: Dell

This is part of a series on SNIA BlogFest 2.

Update 2011-07-29: Download the slides from the Dell presentation here.

Update 2011-08-10: I’ve received an email from Dell with some clarifications and corrections, which are updated inline, with some extra information near the end.

Dell was our third vendor for the day, and they had been watching the Twitter stream. This was one of the first times I’d seen evidence of a vendor keeping themselves informed of how the day was going. Given the open and informal nature of our tweets on a BlogFest day, this gave them some great insight into what we were looking for, and how the other vendors had performed.

At Dell we had quite a large contingent:

  • David Holmes, Solutions Marketing Manager, Australia and NZ
  • Boris Jirgens, Storage Specialist
  • Andrew Diamond, Storage Specialist
  • Charlie Lee, Solutions Specialist, Intelligent Data Management
  • Marty Filipowski, Senior Corporate Communications Advisor (@MartyAtDell)

After introductions, and a quick coffee order, we jumped right in.

Dell’s Strategy

Dell started off by describing their strategy, which was to improve efficiency using an open-standards approach and automation, and to deliver a better Total Cost of Ownership to their customers. Fair enough.

Product Mix

To do this, Dell has four main product lines, just like HP. They are:

  • EqualLogic, for iSCSI-only workloads.
  • Compellent, for FC and iSCSI.
  • PowerVault, for entry-level FC and iSCSI, aimed at the SMB and mid-level market.
  • DX Object Store, for long-term archival.

You’ll note there are no file protocols here, only block. Dell have actually added a file-protocol-capable product to their EqualLogic brand recently, but it wasn’t discussed.

[email update from Dell:]

This was covered on slide 5 as attached…

Dell Scalable File System is based on IP that Dell acquired from Exanet. The NX3500 is available today with unified capabilities and we ship the same functionality for the EqualLogic Platform called the FS7500 (as announced at Dell Storage Forum). Both provide Scale-Out High Performance Primary Storage that supports both Block (iSCSI) and File (both CIFS & NFS access). Further, the FS7500 supports all existing EqualLogic arrays allowing customers to scale a single file share of up to 512TB.

FS7500 fully supports the primary Dell Fluid Data value proposition, including pay as you grow scalability, ease-of-use and inclusive software licensing. Dual controllers, redundant power supplies and fans, and battery-backed cache make the FS7500 an ideal solution for high availability and high reliability deployments. Capacity can be expanded on the fly without disrupting the storage system or applications. The FS7500’s comprehensive management system helps improve productivity and makes it easy to configure and manage iSCSI, CIFS and NFS storage with EqualLogic Group Manager and SAN HQ.

Dell also offers file sharing capabilities through entry level NAS products in our PowerVault systems as well as unified file and block storage for the Compellent SAN.

More info on our NAS range here: http://www.dell.com/us/enterprise/p/network-file-storage

You might also note the lack of EMC OEM products. Dell re-signed their reseller agreement with EMC for five years in late 2008, though talk of cracks widening between the two companies has persisted since well before then. The EqualLogic acquisition has pretty clearly replaced EMC gear as Dell’s storage product of choice for that market.

It was made abundantly clear on the day, without an explicit statement, that while Dell will continue to service their existing Dell | EMC customers, they won’t be pushing EMC’s product any more. The focus is most definitely on Dell owned brands, so I’d be surprised if the reseller arrangement survives much longer.

Success Claims

Dell made some pretty bold claims about their gear:

  • Number 1 vendor for virtual storage
  • Number 1 vendor for iSCSI storage
  • 100% (yes, 100%) of Compellent customers would buy again
  • Ocarina yields 57 times (yes, times) better data reduction than NetApp, with up to 90% lossless reduction in file size

I have no secondary sources for any of this, so take it with a grain of salt. Having said that, Rodney had some interesting things to say about Ocarina on the bus on the way to Dell.

Ocarina

Apparently the Ocarina software is based on some very clever maths that chooses an optimal compression algorithm depending on what sort of data it is. If it’s a JPEG, use algorithm A, if it’s a Word document, use algorithm B.

It gets better.

If you have a zip file containing a Word document, which in turn contains a spreadsheet with embedded graphics, Ocarina is smart enough to “open up” this data so it knows what the components are. It then uses the most appropriate algorithm on each different part of the data. That’s pretty cool, though I’d expect it to be quite CPU intensive. Using a single Lempel-Ziv variant on all your data isn’t going to be as good as something adaptive.
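
I don’t have details of Ocarina’s actual algorithms (they’re proprietary, and we didn’t dig into them on the day), but here’s a rough sketch of the general idea of content-aware compression. The file types, codecs and mapping are entirely my own invention for illustration:

```python
# Illustrative sketch of content-aware compression; not Ocarina's actual code.
# Pick a codec per object type, and recurse into containers so each
# component gets its own treatment.
import bz2
import lzma
import zipfile
import zlib
from io import BytesIO

# Hypothetical mapping of file type to codec; a real product would use far
# more sophisticated, format-specific transforms.
CODECS = {
    ".txt": lambda data: zlib.compress(data, 9),
    ".doc": bz2.compress,
    ".csv": lzma.compress,
    ".jpg": lambda data: data,  # already compressed; recompressing rarely helps
}

def compress_object(name, data):
    """Compress one object, recursing into zip containers."""
    if name.lower().endswith(".zip"):
        # "Open up" the container and treat each member separately.
        with zipfile.ZipFile(BytesIO(data)) as zf:
            return [compress_object(m, zf.read(m)) for m in zf.namelist()]
    ext = "." + name.lower().rsplit(".", 1)[-1] if "." in name else ""
    codec = CODECS.get(ext, zlib.compress)  # fall back to plain deflate
    return (name, codec(data))

print(compress_object("report.doc", b"some repetitive text " * 100)[0])
```

Even a trivial version like this shows why it would be CPU-hungry: every object gets inspected, unpacked if it’s a container, and run through a codec chosen specifically for it.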

Dell said that they expect that the Ocarina software will be made available across all of their products at some point, but again, details are scant.

EqualLogic

Charlie came to Dell as part of the EqualLogic acquisition, and he gave us the low-down on what the product is about. I didn’t take much in the way of notes other than the following tidbits:

It can do real-time sub-LUN auto-tiering, but it can only use 15MB chunks because it’s not a 64-bit product yet. I guess it’s using 32-bit chips inside somewhere.

You can do “live volume” migrations between arrays, which is good to see, as I mentioned in the HP piece. Moving data around online is a critical feature for the modern data centre.

The software is only “purchased” once, though you will pay yearly maintenance. Licensing is per-array, so if you buy a new array you need to buy the software for that array, but if you upgrade an existing array to something else, you don’t have to buy the software again.

The goal here from Dell is that they don’t want to make you do forklift upgrades. You should be able to upgrade components of your arrays, piecemeal, so you can keep your data online.

Mini-Rant

I’m going to indulge myself with a quick aside on this point, because I think it’s really important and horribly overlooked by most of the IT world. It’s a bit of a pet peeve of mine.

When you buy IT gear, it’s usually depreciated over 5 years. If you’re a horribly inefficient large enterprise, and many of them are, it’ll take you maybe 6 months to spec, order, take delivery of, install and put into production a chunk of expensive new kit like a storage array. The record for what I’ve personally observed is over 9 months.

Installation is thus roughly 9% of the accounting life of the kit (6 months out of the 66 months from order to the end of depreciation). It’s even less if you use gear for longer than 5 years, and many people do. And yet the amount of attention people give to the purchase and installation of new gear is vastly out of proportion to the time it represents. The gear is powered on, operational, and needing maintenance for the other 91% of its accounting life.

But operational maintenance is boring and hard, and installing shiny new toys is fun and exciting. Which is why the shiny toys brigade shouldn’t be allowed to run your IT department.

</endrant>

So I’m well pleased that Dell are taking operational maintenance into account when designing their gear. This demonstrates real interest in the actual customer problems, not just the marketing hype-driven crusade for New! Shiny! Features!

I asked some tricky questions about the licensing, such as “is it transferable?”, which the Dell folks couldn’t answer for me at the time. I’m still waiting on answers, but when I get them, I’ll update this post.

Moving on…

Compellent

Compellent arrays do RAID using 2MB “pages”. Again, this is similar to the way XIV and 3PAR work, and I reckon it’s a more modern way of approaching the problem than simple physical spindle based RAID.

It means you can create data volumes that span more than one kind of disk (say, SATA and SAS), so frequently used data sits on the faster disk for performance while less-frequently used data goes on slower, cheaper disk.
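
As a way of picturing it (this is just my own toy illustration, not Compellent’s actual on-disk format), each 2MB page of a volume can live on a different class of disk, and the array keeps a map of which page is where:

```python
# Toy illustration of page-based volumes; not Compellent's actual format.
PAGE_SIZE = 2 * 1024 * 1024  # 2MB pages, per the description above

# A volume is just a map of page index -> (tier, physical location).
volume = {
    0: ("ssd",  "pool-a/page-0412"),   # hot pages on fast disk
    1: ("sas",  "pool-b/page-1177"),
    2: ("sata", "pool-c/page-9034"),   # cold pages on cheap, slow disk
}

def locate(volume, offset):
    """Work out which page, and therefore which tier, an I/O lands on."""
    page_index = offset // PAGE_SIZE
    tier, location = volume[page_index]
    return tier, location

print(locate(volume, 5 * 1024 * 1024))  # offset 5MB lands on page 2, on SATA
```

Moving a page between tiers then just means copying 2MB of data and updating one map entry, rather than restriping a whole spindle-based RAID set.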

There is a once-per-day automated re-balancing operation called “Data Progression” that optimises the page layout so the busy data goes on the faster disk. I asked if there was any way to manually tune things, or change the schedule, since a daily average calculation will smooth out spikes in load.

For many workloads, hotspots tend to occur in spikes, rather than a consistently hot area, and they’re often seasonal: bootup/shutdown at the start/end of the day, top-of-the-hour scans, that sort of thing. These will tend to get averaged down to nothing because they’re only temporary, but you often want to pin some of this data into faster disk or memory because you know you’re going to have a login storm again tomorrow morning.

Actually, I’d love to see an array smart enough to detect seasonality, so that it pre-emptively loads boot data into SAS ready for the morning login rush, and then pages it out to SATA for the rest of the day. Anyone working on something like that?
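
To make that wish a bit more concrete, here’s a toy sketch of what I mean. This is entirely my own invention, not a feature of any array we discussed: track page accesses by hour of day, and pre-stage pages that are usually busy in the coming hour onto fast disk.

```python
# Toy sketch of seasonality-aware tiering; my own wishlist, not a real product.
from collections import defaultdict

class SeasonalTierer:
    """Track per-page access counts by hour of day, and suggest pages to
    promote to fast disk ahead of their usual busy period."""

    def __init__(self, promote_threshold=100):
        # history[page][hour] = accesses seen in that hour of day, over time
        self.history = defaultdict(lambda: defaultdict(int))
        self.promote_threshold = promote_threshold

    def record_access(self, page, hour):
        self.history[page][hour] += 1

    def pages_to_prestage(self, next_hour):
        """Pages that are historically busy at next_hour (say, the 8am login
        storm) get promoted to SSD/SAS before the rush, not after it."""
        return [page for page, by_hour in self.history.items()
                if by_hour[next_hour] >= self.promote_threshold]

# After a few days of stats, ask what to promote before the morning rush.
tierer = SeasonalTierer(promote_threshold=3)
for day in range(3):
    for page in ("boot-image-1", "boot-image-2"):
        tierer.record_access(page, hour=8)
print(tierer.pages_to_prestage(next_hour=8))  # -> both boot pages
```

A daily-average scheme will smooth that morning spike away; something that remembers the time-of-day pattern wouldn’t.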

The Dell Enterprise Manager software is used to manage the arrays, and this is the same software used in the back end for Dell’s Compellent support service, CoPilot.

General Chat

We actually finished with the slides and deep technical stuff fairly early, and then had plenty of time for Q&A. The bloggers all commented that they felt we’d covered a lot more material in less time than at HP.

Dell’s strategy is to move from being box droppers to solution providers, so they’re looking at the integration of the stack. They feel they’re well placed for it, because they’re traditionally quite strong in servers, but the storage business is growing fast for them. They’ve also just announced their intention to acquire a networking company, Force10, who have a good presence in high-end datacentre networking.

Dell’s actions are following their intentions well, so if they can successfully integrate the acquisitions into the main company, they should be well placed to start offering “stack” style solutions, which, as we spoke about with DiData, seems to be where the industry is headed.

Dell sees automation and a reduction in complexity as important features of modern data centres. Automation wasn’t a major point when talking about the disk arrays themselves, beyond some internal capabilities for them to dynamically move data around for load-balancing reasons, so there’s some work still to be done here.

Similarly, there are multiple product lines with different branding and different features, so an integrated or simplified approach will be a challenge. Dell don’t have an equivalent “validated stack design” a la vBlock, or the Cisco UCS/VMware/NetApp alliance, so there’s a risk that customers looking for a simple, pre-integrated solution will pass over Dell’s mix-and-match offering as too complex.

[email update from Dell:]

As well as the Scalent solution you’ve mentioned, we also touched on vStart during the discussion, which has already launched in the US and EMEA and will soon arrive in Asia Pacific (including ANZ).

See www.dell.com/vStart

The Scalent acquisition was mentioned as forming a key piece of the puzzle. Expect to see more on this front as Scalent is integrated more fully into Dell’s Advanced Infrastructure Manager software, and it’s able to drive all the new hardware options from a central point of control.

I’d say Dell still have a long way to go before they take on other vendors outside of servers in the enterprise end of the market. They’ve made a lot of acquisitions in a short space of time, so the underlying blocks are there, but bringing new companies into the fold is always tricky, and doing several at once while trying to innovate is a big challenge.

Only time will tell how successful Dell manage to be.

Update 2011-08-10: Q and A

We asked some questions that Dell weren’t able to answer on the day, but David Holmes from Dell kindly emailed me with some followups. Here they are, quoted verbatim:

Q1. Can Dell Compellent perpetual licenses be transferred between companies / legal entities?

Perpetual licenses can be transferred between legal entities as long as the other Compellent System is decommissioned.  Another possible exception is if there is an acquisition of the company in question.

Q2. Does dynamic tiering identify regular hot spots (e.g. daily scheduled batch transfers) on specific LUNS or volumes and migrate those workloads?

Dell enables sub-LUN dynamic tiering based on frequency of access in Dell EqualLogic PS6000XVS and PS6010XVS arrays and the Dell Compellent Storage Center SAN.

The EqualLogic XVS arrays track the capacity in use by access frequency, and categorize it into one of three classes: high I/O, medium I/O, and low I/O. Based on this categorization, the XVS evaluates the data residing within the array and relocates it to the appropriate tier, placing high I/O and medium I/O data on SSD and low I/O data on HDD storage.

The Compellent SAN dynamically moves enterprise data to the optimal tier based on actual use. The most active blocks reside on high-performance SSD, Fibre Channel or SAS drives delivering faster I/O, while infrequently accessed data migrates to lower-cost, high-capacity SAS drives. Frequently accessed data may also get cached, resulting in read ahead optimizations as well. If a block of data has progressed down to Tier 3, and is only accessed once per day (a backup for example) then the data will remain on Tier 3. If the block is accessed multiple times throughout the day then the software will move it up to higher tier. If a particular volume needs guaranteed performance it can be given a storage profile that effectively locks the volume into a particular tier of storage, such as SSD, or by RAID levels,  such as writes in RAID 10 and reads in RAID 5 for tier 1 only.

Also note that dynamic tiering is also available on the EqualLogic platform as described in this whitepaper

Using Tiered Storage in a PS Series SAN: http://www.equallogic.com/WorkArea/DownloadAsset.aspx?id=5239
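
Just to restate the progression rule described in that answer in a more compact form, here’s my paraphrase of the logic as I understand it. The thresholds and tier names are mine, not Dell’s:

```python
# My paraphrase of the Data Progression rule described above; not Dell's code.
def choose_tier(accesses_per_day, locked_tier=None):
    """Pick a tier for a block based on how often it was touched.

    locked_tier models a storage profile that pins a volume's blocks to a
    particular tier regardless of access frequency.
    """
    if locked_tier is not None:
        return locked_tier
    if accesses_per_day <= 1:       # e.g. touched once a day by a backup
        return "tier3-bulk-sas"
    if accesses_per_day < 100:      # warm data (threshold is made up)
        return "tier2-fc-sas"
    return "tier1-ssd"              # hot data stays on the fastest disk

print(choose_tier(1))                                  # tier3-bulk-sas
print(choose_tier(500))                                # tier1-ssd
print(choose_tier(500, locked_tier="tier2-fc-sas"))    # pinned by a profile
```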

Q3. VMware VAAI developments on Storage

EqualLogic – currently supported

Compellent – block zeroing is supported, other primitives in development, stay tuned for more details.

Kudos again to Dell for providing detailed follow-up information. They definitely win the “Most Engaged with the Community” award.

Merch disclosure:

  • A large decaf skinny latte. Yeah, I know, why bother.

2 Comments

  1. Not sure about the accuracy of the Ocarina claims. I heard some Ocarina guys on an InfoSmack podcast a year or so ago, and they stated that they use Lempel-Ziv (LZ) variant compression across the board; nothing is special in their compression, but their secret sauce is in how they do it.
    I for one am a bit confused about

  2. Maybe they just use different LZ variants based on which one works better in the context?
    We didn’t cover Ocarina much on the day, and the slides don’t have much more detail. It’s not available until 2nd half of 2011, so hopefully we’ll hear more about it soon.
