The centralised command-and-control method doesn’t work at scale. Many people have discovered this independently, from the Romans to the Chinese. It’s just not humanly- (or even computerly-) possible to track that many interdependent variables and make smart decisions.
To manage a vast array of components, a central apparatus needs to understand every detail of the entire whole. You need to direct every single peasant about how many shoes to make, or what to plant (and when! and when to fertilise! and when to harvest!) and if you don’t get it all right, the entire system collapses under its own weight.
Instead, delegation works better. There is an overriding common goal, but groups of individuals, or elements, are able to make their own decisions (within certain spans of control) to propel the overall organisation towards the goal.
The trick is deciding how much to delegate, and how much to control. It’s not a binary choice, but a continuum.
This delegation choice correlates quite closely to the “promise theory” that is at the core of Cisco’s ACI, as explained to me by Joe Onisick and his betters. The principle sounds simple: define the overall goal, and let sub-groups figure out how to achieve it. Define overall policy, and let the switch (or webcam, or IP-phone, or firewall, or whatever) implement that policy based on how it needs to.
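The principle can be sketched in a few lines of code. This is a toy illustration of the idea only (all names here are hypothetical, not Cisco's actual API): a controller declares *what* is wanted, and each element decides *how* to realise it in its own vocabulary.

```python
# A minimal sketch of the promise-theory idea: the controller declares
# the desired end state, and each element promises to implement it
# however suits its own hardware. Names are invented for illustration.

POLICY = {"app": "web", "allow": [("any", "web", 80)]}  # desired end state

class Switch:
    def realise(self, policy):
        # A switch might express the policy as ACL entries.
        return [f"acl permit tcp any -> {dst} port {port}"
                for _, dst, port in policy["allow"]]

class Firewall:
    def realise(self, policy):
        # A firewall might express the very same policy as zone rules.
        return [f"rule: zone-any to zone-{dst} service tcp/{port} permit"
                for _, dst, port in policy["allow"]]

# The controller never issues device-specific commands; each element
# keeps its promise in its own terms.
for element in (Switch(), Firewall()):
    for line in element.realise(POLICY):
        print(f"{type(element).__name__}: {line}")
```

The point is that the central definition stays small and abstract, while the device-specific knowledge lives where it belongs: in the device.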
Promise Theory and Management
This is very similar to corporate management. You need to define sane, and well formed, overall policy for sub-units to implement. If the policy is insane, or contradictory, or just bad, then the most dutiful implementation will still result in failure. If you tell a firewall to allow all inbound traffic no matter the source, you’re going to have a bad time.
But if we assume the policy is good, computers are very good at following orders. They are deterministic, which means they only interpret policy in one way. They may be complex, sometimes vastly so, but they are not random.
Humans, however, are vastly more complex than computers. They can be chaotic, if not completely random, when compared with the regular performance of computers. A computer is rarely perverse: it will do what it is told (sometimes annoyingly so) while a human may decide to do the exact opposite of what they are instructed, because they feel like it.
Now add in the human propensity to misunderstand. Unlike a computer, which does only what you tell it to do, human language contains a vast capability to miscommunicate. What you think you are defining as policy may not be what a group of people (or an individual) think you mean. Communication between humans is lossy.
The courts are filled with cases of humans thinking different things about what was promised. These disagreements are frequently not rational, in the economic sense or indeed in the usual sense. Contract law exists largely because of this propensity of humans to misunderstand one another, sometimes deliberately.
While the lessons of scaling communication between many computer systems may well be applied to human communication, perhaps there is a lesson from human interaction for computer interaction as well?
After all, humans have been co-existing on this planet, more or less, for thousands of years, and in recent times many billions of us at once.
It hasn’t all gone well.
I think the potential for promise theory is very large, but it’s also quite recent, and the full impact of it hasn’t been quite fleshed out yet. Distributed systems are extremely complex to understand, and many people vastly smarter than I are attempting to wrangle them into usable shape.
To dismiss ACI, and promise theory, out-of-hand seems terribly foolish to me. But then so does assuming it is a panacea. It wasn’t that many years ago that separation of the control and data planes was proclaimed as the new heir to the networking throne, so perhaps we should wait a little longer before we swear fealty to promise theory, or ACI, as our new King?
Great articles. I very much appreciate my ‘betters’ helping me to grasp the limited amount I do about promise theory and overall control theory.
I agree wholeheartedly that diving into anything new as a ‘panacea’ is foolish. Customers should evaluate options and see what works for their environment and scale. With ACI we’ve had many customers do so (from small scale to SP/IaaS scale). They’ve come out happy, and are moving forward with ACI.
I’d be careful with comparing promise theory as a control methodology to OpenFlow as a network technology. It’s definitely apples and oranges. Additionally OpenFlow has the problem of trying to centralize a distributed system that has been functioning quite well for years while the rest of the world is trying to distribute centralized systems.
From the network forwarding perspective alone, ACI does not modify distributed forwarding, because forwarding packets has never been the problem. ACI instead centralizes the application of policy, and does so using promise theory as the policy application methodology.
At a minimum ACI allows policy to be applied on a tenant or application basis, rather than trying to manually configure it box-by-box and hoping you come out with a working system.
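That "hope you come out with a working system" problem is easy to demonstrate. Here is a toy sketch (hypothetical names, not ACI syntax) of why declaring policy once per tenant beats typing it into each box by hand: one definition is rendered everywhere, so the boxes cannot silently drift apart.

```python
# Toy comparison: policy rendered from a single source of truth versus
# the same intent hand-typed box-by-box. Names are invented for illustration.

TENANT_POLICY = {"tenant": "acme", "allow_port": 443}

def render(policy, device):
    # Every device derives its config from the one tenant-level definition.
    return f"{device}: permit tcp/{policy['allow_port']} for {policy['tenant']}"

leaves = ["leaf1", "leaf2", "leaf3"]
rendered = {d: render(TENANT_POLICY, d) for d in leaves}

# Box-by-box, the same intent is typed three times; a one-digit typo on
# one box (tcp/433 instead of tcp/443) quietly breaks the system.
manual = {
    "leaf1": "leaf1: permit tcp/443 for acme",
    "leaf2": "leaf2: permit tcp/433 for acme",  # human error
    "leaf3": "leaf3: permit tcp/443 for acme",
}

drift = [d for d in leaves if manual[d] != rendered[d]]
print(drift)  # comparing against the rendered policy exposes leaf2
```

With a single policy definition there is nothing to mistype per device, and drift becomes detectable by comparison rather than by outage.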
Joe Onisick – TME Cisco INSBU
Thanks for commenting, Joe.
I suspect that we’ll end up with some sort of hybrid system in the end: tight, declarative control for systems where discretion is not wanted or useful, and looser, policy-based control for everything else.
The trick will be getting the devices that implement policy to behave predictably, and making the policy definition powerful enough to be useful, yet simple enough that it still results in predictable device behaviour.
Is there such a thing as a policy based programming language? It’s probably lisp. Everything seems to come back to lisp.