The centralised command-and-control method doesn’t work at scale. Many people have discovered this independently, from the Romans to the Chinese. It’s just not humanly- (or even computerly-) possible to track that many interdependent variables and make smart decisions.
To manage a vast array of components, a central apparatus needs to understand every detail of the whole. You need to direct every single peasant about how many shoes to make, or what to plant (and when! and when to fertilise! and when to harvest!) and if you don’t get it all right, the entire system collapses under its own weight.
Instead, delegation works better. There is an overriding common goal, but groups of individuals, or elements, are able to make their own decisions (within certain spans of control) to propel the overall organisation towards the goal.
The trick is deciding how much to delegate, and how much to control. It’s not a binary choice, but a continuum.
This delegation choice maps quite closely onto the “promise theory” at the core of Cisco’s ACI, as explained to me by Joe Onisick and his betters. The principle sounds simple: define the overall goal, and let sub-groups figure out how to achieve it. Define overall policy, and let the switch (or webcam, or IP-phone, or firewall, or whatever) implement that policy however it needs to.
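That split between declaring intent and implementing it can be sketched in a few lines of code. This is a hypothetical illustration of the promise-theory style of delegation, not Cisco ACI’s actual API: a controller publishes one declarative intent, and each device agent promises only its own behaviour, rendering the shared intent into a device-local rule however it sees fit.

```python
# A minimal sketch of promise-theory-style delegation. All names here
# (Intent, Device, Switch, Firewall) are illustrative, not ACI's API.
from dataclasses import dataclass


@dataclass
class Intent:
    """Declarative policy: WHAT we want, not HOW each device does it."""
    app: str
    allow_from: str


class Device:
    def __init__(self, name: str):
        self.name = name
        self.promises: list[str] = []

    def accept(self, intent: Intent) -> str:
        """An agent promises only its OWN behaviour: it translates the
        shared intent into a local rule and records that promise."""
        rule = self.render(intent)
        self.promises.append(rule)
        return rule

    def render(self, intent: Intent) -> str:
        raise NotImplementedError


class Switch(Device):
    # A switch fulfils the intent in its own terms (VLAN permits).
    def render(self, intent: Intent) -> str:
        return f"{self.name}: permit vlan {intent.allow_from} -> {intent.app}"


class Firewall(Device):
    # A firewall fulfils the same intent in zone/app terms.
    def render(self, intent: Intent) -> str:
        return f"{self.name}: allow src-zone {intent.allow_from} dst-app {intent.app}"


# The controller states the goal once; devices fill in the details.
intent = Intent(app="billing", allow_from="web-tier")
fabric = [Switch("leaf-1"), Firewall("fw-edge")]
rules = [device.accept(intent) for device in fabric]
```

The point of the sketch is that the controller never issues device-specific commands; each element keeps autonomy over how it honours the common goal, which is what lets the scheme scale.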
Promise Theory and Management
This is very similar to corporate management. You need to define sane, and well formed, overall policy for sub-units to implement. If the policy is insane, or contradictory, or just bad, then the most dutiful implementation will still result in failure. If you tell a firewall to allow all inbound traffic no matter the source, you’re going to have a bad time.
But if we assume the policy is good, computers are very good at following orders. They are deterministic, which means they only interpret policy in one way. They may be complex, sometimes vastly so, but they are not random.
Humans, however, are vastly more complex than computers. They can be chaotic, if not completely random, when compared with the regular performance of computers. A computer is rarely perverse: it will do what it is told (sometimes annoyingly so) while a human may decide to do the exact opposite of what they are instructed, because they feel like it.
Now add in the human propensity to misunderstand. Unlike a computer, which does only what you tell it to do, human language contains a vast capability to miscommunicate. What you think you are defining as policy may not be what a group of people (or an individual) think you mean. Communication between humans is lossy.
The courts are filled with cases of humans thinking different things about what was promised. These disagreements are frequently not rational, in the economic sense or indeed in the usual sense. Contract law exists largely because of this propensity of humans to misunderstand one another, sometimes deliberately.
While the lessons of scaling communication between many computer systems may well be applied to human communication, perhaps there is a lesson from human interaction for computer interaction as well?
After all, many billions of humans have been co-existing on this planet, more or less, for hundreds of years.
It hasn’t all gone well.
I think the potential for promise theory is very large, but it’s also quite recent, and its full impact hasn’t quite been fleshed out yet. Distributed systems are extremely complex to understand, and many people vastly smarter than I are attempting to wrangle them into usable shape.
To dismiss ACI, and promise theory, out-of-hand seems terribly foolish to me. But then so does assuming it is a panacea. It wasn’t that many years ago that separation of the control and data planes was proclaimed as the new heir to the networking throne, so perhaps we should wait a little longer before we swear fealty to promise theory, or ACI, as our new King?