Fixing the IoT DDoS Threat

In light of the release of the Mirai botnet code, and the news that historically massive DDoS attacks are being driven by poorly secured Internet of Things (IoT) devices, I was musing this morning on how to address the issue, and had an idea.

A big problem with IoT devices is their embedded nature. They are designed to operate untouched by human hands. There are, or soon will be, many more of them than PCs and smartphones. And yet, inside them all, is a small computer that connects to the Internet.

The problem with these embedded devices is that they are made by humans, and therefore flawed. Some of those flaws are security flaws that allow them to be taken over by nefarious people for nefarious ends. Again, this isn’t different from standard PCs, which are also used in botnets, but IoT creates new challenges of scale and scope.

IoT has a much higher risk from abandonware. The company that made the device goes out of business. Or the product is end-of-life and no longer officially supported or patched. What now? What happens if a massively successful product from three years ago has a major flaw that allows attackers to enslave those devices in a DDoS botnet? How do we fix that?

Threat to Others

The scale of the IoT DDoS problem is a threat to everyone on the Internet, and the Internet is very close to becoming an essential service in many economies. What happened to Brian Krebs could happen to anyone. Banks, government services, hospitals, you name it.

In economic terms, IoT is creating a negative externality. Since these devices pose a threat to everyone, we need to adjust the incentive structures so that the true costs of securing the devices are borne by those creating them, which isn’t the case at the moment as far as I can see.

IoT devices don’t get patched. It’s like the very early days of Windows, when patching was sporadic and poorly organised. With Windows 10 it’s virtually mandatory, and from a security perspective, that’s a good thing. Linux devices can also be configured to auto-apply updates that are flagged as security updates, though I don’t think it’s turned on by default (yet).
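On Debian-derived systems, for instance, that auto-apply behaviour comes from the unattended-upgrades package. A minimal sketch of enabling it (package names and file paths are as shipped on Debian/Ubuntu; an embedded vendor’s build may differ):

```shell
# Install the unattended-upgrades package (Debian/Ubuntu).
sudo apt-get install unattended-upgrades

# Enable periodic list updates and automatic upgrades.
# This writes /etc/apt/apt.conf.d/20auto-upgrades containing:
#   APT::Periodic::Update-Package-Lists "1";
#   APT::Periodic::Unattended-Upgrade "1";
sudo dpkg-reconfigure --priority=low unattended-upgrades

# By default only packages from the security origin are applied;
# /etc/apt/apt.conf.d/50unattended-upgrades controls which origins
# are allowed and whether automatic reboots are permitted.
```

Nothing about this is hard on a device the vendor still ships updates for; the hard part is that most IoT firmware has no update channel at all.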

Under the Australian Consumer Law, products you buy need to be of ‘acceptable quality’, which includes that they:

  • are free from defects. I would argue that security flaws in IoT devices are a defect.
  • are safe. I would argue that being trivially pwnable and able to join a DDoS botnet is unsafe.
  • are durable. This is a trickier area. People replace smartphones all the time, but what about smartmeters? Security cameras?

I’m not a lawyer, and to my knowledge, there isn’t established case law regarding IoT software flaws as contravening the ACL, but I’d like to see some. The ACL does appear to cover some of the problem, but the nature of embedded IoT devices creates an issue around the durability aspect.

Now one could argue that a company is responsible for patching while the devices are embedded and working, but that could be a very long time, particularly in a market where devices stay in service far longer than their expected lifetime. There are medical practices still running Windows XP desktops, for example. Should Ford still be liable for problems with the few Model-Ts that are still functional?

That’s probably unreasonable, but if you want to drive a Model-T on the public roads, it still needs to be compliant with a set of safety standards. Why should devices that want to operate on the public Internet be any different?

What if there are some minimal safety standards that devices must adhere to if they are to operate on the Internet?

There are a couple of wrinkles in this approach. Those who run a Model-T tend to be enthusiasts, and it’s much easier to police any violations. Those who run an 8-year-old ADSL modem are just using something they paid for that still works. While I like the idea of “minimum Internet safety” in the long run, in the short run it’s probably not workable because the burden would end up being on consumers, not the companies who make the insecure products.

And let’s be fair. It’s hard to secure a device against a flaw that hasn’t been discovered yet. There’s plenty more device makers could be doing to make better embedded software in their products (looking at you, Samsung) but if no one knows about the flaw yet, it’s harder to secure against than something we do know about. And people will–rightly in my opinion–focus on the known flaws first.

Open Source The Problem?

A major issue with all these devices is that no one can fix them. The companies who made them won’t, because the product is end-of-life and there was no legal requirement to keep patching, so it was never costed into the product. Imposing that requirement retroactively creates a liability after the fact, which is legally problematic, and the major multinationals will fight it tooth and nail.

What if, for any new devices entering the market, a company were made liable for security patching for as long as the device remains under active support? That’s no different from how I feel the ACL should operate, so no big deal, right?

But what if companies were required to open-source the software on their devices when they want to stop actively supporting them? If a company isn’t going to release security patches for the code on that device, then we, as a society, need to be able to fix problems with it in the future, to protect us all from that company’s devices becoming botnet slaves. If we have the code, at least someone could patch it and release new code for people to apply.

This doesn’t help us for companies that release a few hundred devices and then go bankrupt. It doesn’t help us with all the devices that are already out there, unless the “release the code” decree is made retroactive, and even then that only works for companies that still exist.

But it might help us to stop the problem from getting worse while we figure out a better overall solution. If we spend all our time arguing about what the perfect solution is, the problem itself will become less and less tractable.

I’d rather see a few considered, but not over-thought, proposals tried out than to do nothing. Add sunset clauses so that if the solution doesn’t work, then we stop trying to use it.

We’re going to have to do something, or the Internet will continue to be clogged by spam and DDoS traffic that we all pay for indirectly.
