This is part of a series on the NetApp Dynamic Data Center.
If you grabbed the document from my last post about the NetApp Dynamic Data Center, you’ll have noticed the concept of a Project.
Figuring out what exactly a Project is can be one of the hardest parts of using this architecture.
Small, Medium or Extra-Large?
The problem stems from every organisation running its business differently. It’s hard to do one-size-fits-all when there’s such a wide range of shapes and sizes.
Some businesses have applications that are basically silos. They don’t share any information with each other. If they do, it’s through some sort of specialised brokered channel.
Other businesses have applications that are more tightly coupled, sharing data with each other all the time, often in a fairly informal manner.
You can use this model with either end of the spectrum, but it’s important to get your idea of a Project straight in your head first.
Let me explain.
It’s All About the vFilers
The vFiler is the core piece of this system. You want everything that’s connected to a given vFiler to be able to share data.
If you have two hosts connected to the same vFiler, sharing data between them is relatively easy. If they’re connected to different vFilers, you have to do some more complicated network plumbing to make it happen. More complexity is bad, because it makes the environment harder to look after.
The whole point of using this stuff is to get economies of scale by doing things in standard ways. If you go breaking the model, well, why did you buy it in the first place?
Each vFiler belongs to one storage VLAN at a time. This is what gives you your secure isolation from other projects. It’s a feature. The trick is deciding on the level of isolation you want.
Which brings you back to the vFiler, and what needs to share.
The simple rule is: if you want to share data, ever, you’re in the same vFiler. If you put things in different vFilers, you’re explicitly preventing things from sharing storage.
Everything flows on from that.
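The rule can be sketched as a trivial check against an inventory. This is a minimal illustration, not real ONTAP tooling: the host and vFiler names are made up for the example.

```python
# Hypothetical host-to-vFiler mapping; all names are illustrative only.
HOST_VFILER = {
    "web01": "vf_website",
    "web02": "vf_website",
    "db01": "vf_payroll",
}

def can_share_storage(host_a, host_b):
    """Two hosts can share data via storage only if they sit on the same vFiler."""
    return HOST_VFILER[host_a] == HOST_VFILER[host_b]

print(can_share_storage("web01", "web02"))  # True: same vFiler, same Project
print(can_share_storage("web01", "db01"))   # False: different vFilers, isolated
```

That one predicate is really the whole design decision: everything you ever want on the `True` side of it belongs in the same Project.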
Types of Sharing
Here are some examples to make this easier to understand:
- The company website: a 5-way n+1 tier of webservers, 3 app servers, and a database server, all on one vFiler. This is one Project.
- The payroll system: 3 application servers and a clustered database, on another vFiler. This is another Project.
- The Exchange servers for company mail, plus the Symantec Enterprise Vault servers, have another vFiler. This is your email Project.
Here we have isolation vertically, by application.
We can share data between the database servers and the application servers via the storage. Some companies build their apps like this so the application servers can write static HTML directly to the filesystem the webservers serve traffic from.
Some of the downsides of this approach are issues with multiple-physical-site hosting, and a proliferation of ‘projects’ requiring a lot of vFilers, VLANs, and networks to be set up.
But you get high security, and problems within one vFiler/VLAN/network only affect one application.
- All the webservers are in one vFiler. A webserver Project.
- Internet facing application servers are in another vFiler. Internet app Project.
- All internal Solaris servers share another vFiler. Solaris Project.
- All the Windows SQL Server boxes are on another vFiler. SQLServer Project.
Here we have isolation horizontally by server function.
This can work really well: you can isolate servers that sit in low-security networks (Internet-facing, DMZ), and you can share common data between servers with the same function. Boot-from-SAN for all your Unix servers, great de-dupe of common files, a common binaries area (a couple of copies of Oracle instead of one per server, for example): there are a bunch of neat applications of this approach.
The downside is lower security isolation, though there are some techniques to help with this. You also have more eggs in your vFiler baskets, so you need to take better care of them.
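To make the trade-off concrete, here’s a sketch of how the two schemes carve up the same (entirely hypothetical) inventory into different vFiler sets:

```python
# Hypothetical inventory: (hostname, application, function). Names are made up.
servers = [
    ("web01", "website", "webserver"),
    ("web02", "website", "webserver"),
    ("app01", "website", "appserver"),
    ("app02", "payroll", "appserver"),
    ("db01",  "payroll", "database"),
]

# Vertical: one vFiler per application (Project = application)
vertical = {app for _, app, _ in servers}

# Horizontal: one vFiler per function (Project = server role)
horizontal = {fn for _, _, fn in servers}

print(sorted(vertical))    # ['payroll', 'website']
print(sorted(horizontal))  # ['appserver', 'database', 'webserver']
```

Note what changes: horizontally, app01 and app02 land on the same vFiler and can share a common binaries area; vertically, they’re isolated because they belong to different applications.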
Plan, Then Execute
A good place to start is to look at who the administrators for the servers are. They have root access, so they can munge anything on the servers anyway. Group all the servers with a common administration team together. Keep your internet and DMZ servers separate from everything else.
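That heuristic, group by admin team but keep DMZ hosts separate, amounts to grouping on two keys. A rough sketch, with an invented inventory:

```python
from collections import defaultdict

# Illustrative inventory: (hostname, admin_team, network_zone). All hypothetical.
servers = [
    ("web01", "unix-team", "dmz"),
    ("web02", "unix-team", "dmz"),
    ("sql01", "windows-team", "internal"),
    ("sql02", "windows-team", "internal"),
    ("app01", "unix-team", "internal"),
]

# Group by (team, zone): a common admin team shares a vFiler,
# but internet/DMZ hosts never mix with internal ones.
projects = defaultdict(list)
for host, team, zone in servers:
    projects[(team, zone)].append(host)

for key, hosts in sorted(projects.items()):
    print(key, hosts)
```

Each resulting group is a candidate Project; note the unix-team ends up with two vFilers because its DMZ and internal hosts must stay apart.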
The important thing to remember is that you need to decide which approach to use before you start. The two methods are quite different, and you can’t use both at the same time.
If you start with one and then decide to change 8 months later, you’ll have a bunch of production servers that need to have their storage designs changed. Bummer.
It’s rare that you’ll be able to do that without incurring an outage. And it’s an outage that has almost no value to the businesses who are using their applications quite happily, thank-you-very-much. Way to win friends and influence people, dude.
So instead you end up with a bunch of legacy stuff using one method, and a bunch of newer stuff using another. And your Ops Manager hates you.
This is enterprise cloud computing we’re talking about, so you’re not expected to know exactly what to do. Seek the advice of people who’ve done it before.
Get a hold of your NetApp rep and get them to bring in someone from their DDC/cloud team. I know quite a few of them, and they’re smart, helpful people.
If you can, talk to existing customers about their experiences.
Or, if you get desperate, you could email me. ;)
But if you do nothing else, at least take a few minutes to think about how your organisation shares data.