Time To Rethink The Cybersecurity Dictatorship

I think the current approach to cybersecurity is wrong.

Our current approach is—mostly—authoritarian, dictatorial, and undemocratic. It has been imposed on organisations in a top-down, centralised fashion. Mass surveillance is seen as an uncontroversially good thing, as is centralised control over systems. Most security systems are administered by an all-powerful central nexus, and the people working with computers are expected to obey these centrally defined security practices. Lack of obedience is viewed as deviant behaviour that should be corrected, usually through shame and/or punishment.

The model for IT security follows the same kind of carceral approach as that used for policing and law enforcement in major Western nations. This is unsurprising when we look at where most IT technology is created, and by whom. Cybersecurity may have had rebel outcast roots in the early Hackers days, but Zero Cool and Acid Burn took VC money and they sell their software to Agent Gill now. Look around you. Cybersecurity is dominated by ex-law enforcement and military types.

And I think this has unnecessarily limited our thinking about what approaches to security are possible. The current state of affairs isn’t a natural one any more than fully distributed PGP web-of-trust designs would be. Both are design choices we could make.

I think we made the current choices mostly by accident, not on purpose.

What if we made different choices? What could that look like?

Power and Responsibility

Those with greater power to change a system should be more responsible for its flaws. Links are designed to be clicked, so if clicking one is harmful, why is that the fault of the person using the system as designed and not the system designer? Particularly when the system costs customers millions of dollars a year.

We have a very top-heavy approach to security. We rely on authoritarian control from a few key vendors to improve the security of their products, and the entire structure of the system relies on these few vendors to do a good job. Yet when they fail, the consequences fall almost entirely on the people with the least ability to improve the situation. It’s not like I can patch my own iPhone. This seems wrong-headed.

In other product categories, dangerous flaws in the product result in a recall, but we’ve somehow allowed software to bypass the existing regulatory regime. There is an argument this should be permitted for software you write for yourself or give away for free, but for commercial software sold by trillion-dollar global corporations? Shouldn’t the bar be set just a little bit higher?

What if we instead forced Microsoft and Apple and Cisco and IBM and AWS to take responsibility for the products they sell? No one forced them to include vulnerable log4j code in their expensive products, yet somehow the burden of finding and fixing the problem fell on volunteers working mostly for free?

Or, okay, you don’t want to have the responsibility for keeping things secure. Fine. Then you also don’t get to tell me what I can or can’t run on devices I paid for. You can’t stop my device from working if I decide I want to stop giving you money. You don’t get to have it both ways: either it’s my thing, or it’s your thing.

Or, maybe it really is more complex, and it’s a shared responsibility, not an individual responsibility. That will require a radical rethink of the current approach to cybersecurity.

Harm Reduction, Not Crime and Punishment

What if cybersecurity isn’t a problem to be solved but a tension to be managed? What if the best we can hope for is harm reduction?

What if, instead of centralising control and surveillance, we valued people’s autonomy and privacy? What if those in control of the systems were the ones on the hook for making them safe enough for ordinary people to use? What if we taught people to wash their hands, but we also made sure the water wasn’t poisonous?

What if we designed systems where clicking on a link didn’t expose you to malware? What if we didn’t need to spy on people constantly in the name of ‘security’? What if we entrusted people with the tools they need to do their work, or to play, or whatever it is they’re trying to do, and worked with them to make doing that safe? What if ‘safe’ wasn’t just whatever corporate leadership thought mattered, but what employees cared about, too?

It would mean those who currently get to command others to obey their directives would need to learn to influence rather than control. I’m not hopeful, because this would require a massive shift in power dynamics. Those who currently enjoy a great deal of power and privilege would need to give it up, and humans tend not to do this voluntarily. Individuals sometimes do, but en masse? Nope. And given that the tech industry is dominated by rich, mostly white men? What does history tell us is the likely outcome here?

We have zero hope of changing anything if we can’t even consider alternatives to what we’re currently doing. Discussion of alternate power structures is happening, of course; it’s just not being done by the people currently in charge.

Why should we keep going the way we have, given the track record of the past couple of decades? What makes the current approach so worthwhile that we can’t even try out alternate ways of doing things?
