This is part of my series on Kubernetes From Scratch.
“If you wish to make an apple pie from scratch, you must first invent the universe.”
— Carl Sagan
While we’re not quite creating a universe out of nothing, we do need some sort of computing infrastructure to start this little project.
As we discussed in the introduction, there’s a tendency for cloud-native/DevOps people to hand-wave away the underlying infrastructure because someone else has already solved that problem for them. Even Kelsey Hightower’s excellent Kubernetes The Hard Way makes use of IaaS providers, which is fine if that’s the direction you’ve chosen, but I want to dig into the implications of doing that.
What do you do if, for whatever reason, you can’t (or won’t) use a public cloud IaaS provider like AWS, Azure, or Google? What value are they really providing by taking care of all the infrastructure bits and pieces for you? If we know the answers to these questions, then we can better assess whether public cloud, all on-site, or a mix makes sense for a given deployment.
The minimum infrastructure we need is some amount of CPU, RAM, storage, and networking. The barest of bare-metal ways to do this is a bunch of servers, like what we used to do back in the early 2000s during the dot-com/dot-bomb era: racks and racks of Sun Solaris pizza boxes, Cisco Catalyst switch gear, and a bunch of CAT-5. I’m not going to go to quite that extreme in my lab, mostly because I don’t have the number of physical servers we’d need.
What I do have is a Scale Computing HC3 cluster. I have three HC1000 nodes with a total of 12 CPU cores, 96 GB of RAM, and 6 TB of HDD storage to play with, so if we carve this up into a few virtual machines, we can have a pretty decent simulation of a fleet of physical machines. It’s all connected with a single Dell N1524 switch that also runs my production home office, but with the magic of VLANs it can be isolated from the rest of the network pretty easily.
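As a rough sketch of what that isolation looks like, here’s the general shape of the switch-side config. This is a hedged example: Dell N-series switches use a Cisco-style CLI, but the exact syntax varies by firmware version, and the VLAN ID, name, and port range below are all made up for illustration, not taken from my actual setup.

```
! Hypothetical example: put the lab on its own VLAN (100) so it can't
! reach the production home-office network.
! Port numbers and VLAN ID are placeholders; adjust for your hardware.
configure
vlan 100
exit
interface range gigabitethernet 1/0/1-3
switchport mode access
switchport access vlan 100
exit
```

The key idea is simply that the ports facing the lab nodes become access ports on a VLAN that nothing else on the switch is a member of; no routing between that VLAN and the rest of the network means the lab is effectively its own little island.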
This is pretty similar to spinning up a logical lab on your laptop using containers or VMs, but I have a lot more room to play with on this cluster than any of the other gear I currently have.
Why go this way instead of using cloud instances? Mostly because it’s there, and I can. It will also highlight all the things that the cloud abstraction of physical servers deals with for you, which is part of the research goal. In particular, I want to look at how moving physical infrastructure around affects our setup once we get into lifecycle management. I’m cheating a bit, because the HC3 platform already partly abstracts the physical hardware away, but we’re going to pretend that the VMs we use are physical servers when we set up the environment. That way, what we set up should be fairly portable to a physical bare-metal environment, or to some other VM-hosted environment in any given cloud.
I also have a couple of old laptops lying around that could be pressed into service in a pinch, and it might be fun to use them to test what a heterogeneous environment looks like and how well it works. We’ll leave that until near the end.