Background on NetApp Dynamic Data Center

Lest people start thinking I’m an EMC weenie (no offense, guys), I want to point out some good info NetApp has been making public about their Dynamic Data Center stuff. There’s some really good detail buried in all the marketing guff.

Supernova

The Beginning

You’ll want to start here.

More specifically, you’ll want to read this paper on the solution at Telstra. That’s where it all began.

Go get it. I’ll wait. It’s only 4 pages, and page 3 is the one I want you to look at.

Let me draw your attention to the picture on page 3. That’s the guts of the thing, right there.

Horizontally scalable. Vertically scalable. Secure partitioning.

1.7 Petabytes of storage when it was published in early 2008. It’s much bigger by now.

That was a fun project.

I was reminded by a colleague recently that one of the great things about it was how we used (relatively) old technology to build something new and cool.

Quick Rundown

The switch gear is all Cisco, like it says. Gigabit or better, and dedicated for storage. You pick the models that are appropriate for your environment, and the fan-out ratio that suits.

A lot like your more traditional FibreChannel SAN, right? Hosts connect to it, only you don’t need special FC HBAs at $1k a pop. Standard Ethernet cards, times thousands of servers. Cost savings, right there.
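If you're wondering what a sensible fan-out looks like, the sums are pretty simple. Here's a quick sketch with made-up port counts and link speeds, just to show the arithmetic; plug in whatever figures match your actual switches and Filers.

```python
# Rough oversubscription (fan-out) sums for a dedicated storage network.
# All figures below are illustrative, not a recommendation.

host_ports = 96          # gigabit host-facing ports on an access switch
host_speed_gbps = 1      # standard GbE NICs in the servers
uplink_ports = 8         # uplinks from that switch towards the Filers
uplink_speed_gbps = 10   # 10GbE uplinks

host_bandwidth = host_ports * host_speed_gbps        # 96 Gbps offered
uplink_bandwidth = uplink_ports * uplink_speed_gbps  # 80 Gbps available

fan_out = host_bandwidth / uplink_bandwidth
print(f"Fan-out ratio is {fan_out:.1f}:1")  # 1.2:1 in this example
```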

NetApp Filers at the backend. Whichever ones you want. FAS270 enough for you? Cool. Need the big FAS6080s? No problemo. They all run the same Data ONTAP, and they all talk Ethernet, so you can mix and match depending on what you need. That’s really neat.

And ONTAP supports CIFS, NFS and iSCSI over Ethernet, so you get two kinds of file storage and one kind of block storage over the same wires, with no special translator droids to slow everything down. With FCoE, you can add FibreChannel. I don’t see the need, but hey, some people like it.
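Just to make the “same wires” point concrete, here's roughly what that one network ends up presenting to a project's hosts. The names and paths below are invented for illustration, not real config:

```python
# One Filer, reachable over one Ethernet network, presenting
# file and block storage side by side. All names/paths are invented.

storage_endpoints = {
    "NFS":   {"kind": "file",  "client_sees": "filer1:/vol/projects"},
    "CIFS":  {"kind": "file",  "client_sees": r"\\filer1\projects"},
    "iSCSI": {"kind": "block", "client_sees": "an iSCSI target on filer1 (LUN 0)"},
}

for proto, info in storage_endpoints.items():
    print(f"{proto:5} ({info['kind']:5}) -> {info['client_sees']}")
```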

Need more storage? Add more Filers. Need more hosts? Add more switches. It really is very easy. I’ve done it, as have lots of other people.

Oh, and because it’s all NetApp, if you decide the 270 just can’t hack the load you’ve dumped on it, you can migrate everything across and move it all to the 6080.

The Grey Bit Is Important

Enough breathless hype.

You know how GUIs often grey out the parts you can’t use? Not the case here. The grey ellipse bit is the most important thing in that picture.

Let’s take a closer look at the grey bit, though.

The caption is VLAN + vFiler + volumes. Let’s see what that means:

VLANs

So it’s a layer 2 network. There is sooo much to write about this one area. For now, think about how VLANs work. If you don’t know, I encourage you to read up.

VLANs are really old technology, as far as networking goes. In the same way that hypervisors now give us multiple virtual machines on a single physical server, VLANs give us multiple virtual LANs over the same physical wires, instead of having to cable up each computer multiple times in order to isolate different kinds of traffic.

And that’s what they’re used for here. They logically isolate different projects’ storage traffic from one another, so a web application in a project over on the left can’t access payroll data in the database project on the right.
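If that sounds abstract, here's a toy sketch of the isolation rule. Nothing Cisco- or NetApp-specific about it, and the project names and VLAN IDs are made up; the point is simply that traffic stays inside its own VLAN:

```python
# Toy model of VLAN isolation: traffic is only delivered between
# ports in the same VLAN. Project names and VLAN IDs are made up.

project_vlans = {
    "web-app": 101,
    "payroll-db": 102,
}

def can_talk(project_a: str, project_b: str) -> bool:
    """Two hosts can exchange storage traffic only if their projects
    sit in the same VLAN (deliberately ignoring routers here)."""
    return project_vlans[project_a] == project_vlans[project_b]

print(can_talk("web-app", "web-app"))     # True  - same VLAN
print(can_talk("web-app", "payroll-db"))  # False - layer 2 isolation
```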

That’s not the only bit of virtualisation going on here.

vFilers

NetApp’s MultiStore feature gives you virtual Filers. In the same way that VLANs give us multiple LANs on the same piece of wire, vFilers give us multiple Filers on the one physical chassis.

As far as each project is concerned, the vFiler looks just like their very own Filer. They get storage over NFS, or CIFS, or iSCSI, over their storage network (in a VLAN, remember!) so it looks just like they have their own dedicated hardware.

Only they don’t. Which is heaps cheaper and easier to manage. You can buy a bunch of disk in a single, say, FAS3050A, and then slice it up into virtual pieces, and give a slice to each project.
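Here's a rough mental model of that slicing, with invented names, addresses and volumes (this is just an illustration, not the MultiStore CLI):

```python
# Mental model of MultiStore: one physical Filer, several vFilers,
# each bound to its own VLAN/IP and its own slice of storage.
# Names, addresses and sizes are invented for illustration.

physical_filer = "fas3050a-01"

vfilers = [
    {"name": "vf-webapp",  "vlan": 101, "ip": "10.1.101.10", "volumes": ["vol_web_data"]},
    {"name": "vf-payroll", "vlan": 102, "ip": "10.1.102.10", "volumes": ["vol_payroll_db"]},
]

for vf in vfilers:
    # Each project only ever sees its own vFiler on its own VLAN;
    # the shared chassis underneath is invisible to them.
    print(f"{vf['name']} on {physical_filer}: VLAN {vf['vlan']}, "
          f"IP {vf['ip']}, volumes {vf['volumes']}")
```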

Volumes

Back in the 6.5.x days, TradVols were a tad unwieldy, but still better than mucking about with host-based volume managers. When FlexVols arrived, wow. Soo much better.

You just buy a whole mess of disks, and put them into dirty big aggregates. Lots of spindles == better performance. And then you slice up the aggregate into volumes. All the volumes get to benefit from the high spindle count, so they all get to go faster.

And you can have different volumes on the same aggregate belonging to different vFilers. So you might only have 2 aggregates, yet 20 projects all sharing disk in the back end, each getting the benefit of 50-odd spindles (model dependent, of course). And they still all look like they have their own storage.
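To picture the carve-up, here's a sketch with made-up spindle counts and sizes: a couple of big aggregates, a handful of FlexVols owned by different vFilers, and every volume spread across all the spindles in its aggregate.

```python
# Illustrative carve-up: a couple of big aggregates, sliced into FlexVols
# that belong to different vFilers. Spindle counts and sizes are made up.

aggregates = {
    "aggr0": {"spindles": 28, "size_tb": 10.0},
    "aggr1": {"spindles": 28, "size_tb": 10.0},
}

# (volume, owning vFiler, aggregate, size in TB)
flexvols = [
    ("vol_web_data",   "vf-webapp",  "aggr0", 1.0),
    ("vol_payroll_db", "vf-payroll", "aggr0", 2.0),
    ("vol_mail_store", "vf-mail",    "aggr1", 3.0),
]

for name, vfiler, aggr, size in flexvols:
    spindles = aggregates[aggr]["spindles"]
    print(f"{name} ({size} TB) -> {vfiler}, striped across "
          f"all {spindles} spindles in {aggr}")

# Sanity check: don't promise more space than an aggregate actually has.
for aggr, info in aggregates.items():
    used = sum(size for _, _, a, size in flexvols if a == aggr)
    assert used <= info["size_tb"], f"{aggr} is oversubscribed"
```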

RAID-DP protects the data, and a few spare disks give you hot spares to rebuild onto. Fault tolerance with great performance. What’s not to love?
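If you want to see where the raw capacity goes, the back-of-envelope sums look something like this. Disk counts, sizes and RAID group settings below are illustrative only; real sizing depends on the model and your ONTAP version.

```python
# Back-of-envelope usable capacity under RAID-DP.
# Disk counts and sizes are illustrative only.

total_disks = 28
disk_size_tb = 0.3          # e.g. 300 GB disks of the era
raid_group_size = 14        # disks per RAID group (configurable)
hot_spares = 2

usable_disks = total_disks - hot_spares
full_groups, remainder = divmod(usable_disks, raid_group_size)

data_disks = full_groups * (raid_group_size - 2)    # RAID-DP: 2 parity disks per group
if remainder > 2:
    data_disks += remainder - 2                     # a partial group still needs 2 parity

print(f"{data_disks} data disks ~= {data_disks * disk_size_tb:.1f} TB usable")
```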

How Is This Cloudy?

When you build a system like this, the storage and network become commodity pieces. Cloud computing is all about economies of scale.

The end customer doesn’t need to care what model of Filer their storage lives on. They don’t even need to know which physical Filer it lives on. You can move stuff around, add more, take some away, with far less disruption to the end customer than if they had dedicated gear.

Half a dozen customers (or more) can all be sharing hardware and never even know. Because they don’t have to.

Now they can get on with writing new applications instead of managing storage.

But Wait, There’s More!

More fun stuff coming up in future posts. It’s not all smooth sailing, so I’ll give you some pointers on how to avoid the traps for new players.

TrendsMap don’t use NetApp storage, as far as I know. A guy I know worked on the site, apparently, and I just thought it was cool and wanted to share it with you.

Image by TopTechWriter at Flickr

