The Eigencast 021: Productivity Through People

The Eigencast


Yadin Porter de León

Justin talks to Yadin Porter de León, Head of Content Marketing at Druva Inc.

They discuss how technology companies like to push the productivity cost savings from using their technology, and yet those cost savings never seem to involve firing people. Thus they are really about enhancing the value side of the equation, not about saving cost, but that’s not what they talk about, which is odd.

They talk about how there are lots of average managers, and when they go to cut costs, they often choose highly visible internal things to cut instead of more substantial (and more difficult) savings. A company in a death spiral will be removing free snacks, buying cheap coffee and scratchy toilet paper, rather than getting rid of under-performing managers or making hard business decisions.

They discuss how management is a skill, which requires both theory and practice to get good at. People aren’t just born as great managers, any more than people are born being great at programming in C.

They also talk about the cargo cult of startup management, where people blindly copy the practices of other successful companies without any understanding of whether or not they actually work. Worse, many startups try to abandon management altogether in an attempt to avoid the mistakes of others, and then miss out on all the benefits that good management provides.

There are reasons for organisations to have managers. Not knowing those reasons isn’t a good reason to try to do without management.


  • 00:00:00.000 Intro
  • 00:00:15.856 Episode Intro
  • 00:02:29.295 Interview
  • 00:06:14.900 Productivity vs Firing People
  • 00:10:04.650 Death Spiral
  • 00:11:57.186 Bathroom Judging
  • 00:15:31.575 Management is a Skill
  • 00:19:55.865 Startups and Management
  • 00:24:40.866 Outtakes




This episode of The Eigencast was sponsored by PivotNine. Research, analysis, advice.



Fixing the IoT DDoS Threat

In light of the release of the Mirai botnet code, and the news that historically massive DDoS attacks are being driven by poorly secured Internet of Things (IoT) devices, I was musing on how to address the issue this morning, and had an idea.

A big problem with IoT devices is their embedded nature. They are designed to operate untouched by human hands. There are, or soon will be, many more of them than PCs and smartphones. And yet, inside them all, is a small computer that connects to the Internet.

The problem with these embedded devices is that they are made by humans, and therefore flawed. Some of those flaws are security flaws that allow them to be taken over by nefarious people for nefarious ends. Again, this isn’t different from standard PCs, which are also used in botnets, but IoT creates new challenges of scale and scope.

IoT has a much higher risk from abandonware. The company that made the device goes out of business. Or the product is end-of-life and no longer officially supported or patched. What now? What happens if a massively successful product from three years ago has a major flaw that allows attackers to enslave them in a DDoS botnet? How do we fix that?

Threat to Others

The scale of the IoT DDoS problem is a threat to everyone on the Internet, and the Internet is very close to becoming an essential service in many economies. What happened to Brian Krebs could happen to anyone. Banks, government services, hospitals, you name it.

In economic terms, IoT is creating a negative externality. Since these devices pose a threat to everyone, we need to adjust the incentive structures so that the true costs of securing the devices are borne by those creating them, which isn’t the case at the moment as far as I can see.

IoT devices don’t get patched, much like the very early days of Windows, when patching was sporadic and poorly organised. With Windows 10 it’s virtually mandatory, and from a security perspective, that’s a good thing. Linux devices can also be configured to auto-apply updates that are flagged as security updates, though I don’t think it’s turned on by default (yet).

Under the Australian Consumer Law, products you buy need to be of ‘acceptable quality’, which includes that they:

  • are free from defects. I would argue that security flaws in IoT devices are a defect.
  • are safe. I would argue that being trivially pwnable and able to join a DDoS botnet is unsafe.
  • are durable. This is a trickier area. People replace smartphones all the time, but what about smartmeters? Security cameras?

I’m not a lawyer, and to my knowledge, there isn’t established case law regarding IoT software flaws as contravening the ACL, but I’d like to see some. The ACL does appear to cover some of the problem, but the nature of embedded IoT devices creates an issue around the durability aspect.

Now one could argue that a company is responsible for patching for as long as the devices are embedded and working, but that could be a very long time, particularly in a market where the expected lifetime of the object is shorter than how long it keeps working in practice. There are medical practices still running Windows XP desktops, for example. Should Ford still be liable for problems with the few Model-Ts that are still functional?

That’s probably unreasonable, but if you want to drive a Model-T on the public roads, it still needs to be compliant with a set of safety standards. Why should devices that want to operate on the public Internet be any different?

What if there are some minimal safety standards that devices must adhere to if they are to operate on the Internet?

There are a couple of wrinkles in this approach. Those who run a Model-T tend to be enthusiasts, and it’s much easier to police any violations. Those who run an 8-year-old ADSL modem are just using something they paid for that still works. While I like the idea of “minimum Internet safety” in the long run, in the short run it’s probably not workable because the burden would end up being on consumers, not the companies who make the insecure products.

And let’s be fair. It’s hard to secure a device against a flaw that hasn’t been discovered yet. There’s plenty more device makers could be doing to make better embedded software in their products (looking at you, Samsung) but if no one knows about the flaw yet, it’s harder to secure against than something we do know about. And people will–rightly in my opinion–focus on the known flaws first.

Open Source The Problem?

A major issue with all these devices is that no one can fix them. The companies who made them won’t, because the product is end-of-life and there was no legal requirement to keep patching, so it wasn’t costed into the product. Imposing that requirement now creates a liability after the fact, which is legally problematic, and the major multinationals will fight it tooth and nail.

What if, for any new devices entering the market, companies were made liable for security patching for as long as the device remains under active support? That’s no different to how I feel the ACL should operate, so no big deal, right?

But what if companies were required to open-source the software on their devices when they want to stop actively supporting them? If a company isn’t going to release security patches for the code on that device, then we, as a society, need to be able to fix problems with it in the future, to protect us from your company’s device becoming a botnet slave. If we have the code, at least someone could patch it and release new code for people to apply.

This doesn’t help us for companies that release a few hundred devices and then go bankrupt. It doesn’t help us for all the devices that are already there, unless the “release the code” decree is made retroactive, and that only works for companies that still exist.

But it might help us to stop the problem from getting worse while we figure out a better overall solution. If we spend all our time arguing about what the perfect solution is, the problem itself will become less and less tractable.

I’d rather see a few considered, but not over-thought, proposals tried out than to do nothing. Add sunset clauses so that if the solution doesn’t work, then we stop trying to use it.

We’re going to have to do something, or the Internet will continue to be clogged by spam and DDoS traffic that we all pay for indirectly.

CFD1 Prep: Cisco

Cisco are well known to me, but the topics they’ll apparently discuss with us are not.

Container networking is on the agenda, which should be really interesting, given how Cisco is seemingly so tightly welded to its hardware when containers are almost completely abstracted from the hardware. Networking containers is going to become a big deal, because of two major trends (that I just wrote about for an upcoming issue of CRN Australia incidentally): the decoupling of networking hardware and software, and the rise of automation and orchestration.

Cisco is already moving in the software-defined direction with UCS and ACI, but it’s still centred on hardware. There are loads of startups that are working on the “switches are just servers with lots of Broadcom Trident II chips” approach backed by the Open Compute Project and ONIE. Just like we’ve seen in server land, the purchase of hardware is separate from the software selection. Linux and Windows will both run on a variety of x86 based hardware from a variety of vendors, and that fact has spawned a huge number of startups doing software and HCI type things, not least of which is VMware.

And as we saw with the decoupling of operating system from hardware that virtualisation brought us, we’re starting to see virtual networking operating systems pop up. I expect to see container-based versions of the idea as well. Imagine if a firewall config change were a rebuild/recompile and deploy, the way Docker applications are done today. What about a BGP route-reflector?

We’re also going to see a sprawl in container-like entities an order of magnitude worse than what we have now with virtual machines. They’re small and designed to be deployed en masse. Of course we’re going to see loads of them sitting out there doing whatever it is they do, and they’ll all need to be networked somehow. The only way to cope with the sheer volume will be through automation, because humans just can’t handle that sort of scope cost-effectively, and we’re already seeing IT staff-to-device ratios come way down.

This is as it should be, because manually updating ACLs or routing table entries is boring and humans are bad at it. I’m still somewhat agog at how long it took the networking world to ditch telnet for ssh, and even then the CLI continues to rule supreme when it’s a tedious and error-prone way to configure hundreds of devices. I recall using a Tcl/Tk script to automate MPLS VPN rollouts some 15 years ago (also the Java-based hell of Cisco’s VPNSC product, but let’s not go there), so why oh why isn’t everything REST API based already?
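To make the point concrete, here’s a rough sketch of what configuring a device over a REST API could look like in Python. The hostname, URL path, and payload schema are entirely made up for illustration (real devices, RESTCONF or otherwise, each have their own schemas), but the contrast with screen-scraping a CLI over telnet or ssh should be obvious.

```python
# Sketch: pushing an ACL entry to a switch via a (hypothetical) REST API
# instead of hand-typing it at a CLI. Endpoint, payload shape, and token
# are placeholders for illustration only.
import requests

SWITCH = "https://switch01.example.com"
TOKEN = "api-token-goes-here"  # placeholder credential

acl_entry = {
    "name": "BLOCK-TELNET",
    "sequence": 10,
    "action": "deny",
    "protocol": "tcp",
    "destination_port": 23,
}

resp = requests.post(
    f"{SWITCH}/api/v1/acls/BLOCK-TELNET/entries",  # hypothetical endpoint
    json=acl_entry,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print("ACL entry applied:", resp.status_code)
```

Wrap that in a loop over an inventory file and you’ve configured hundreds of devices consistently, which is exactly the sort of thing humans are bad at and scripts are good at.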

Happily, Cisco have some modern tales to tell here, as they acquired cloud management software startup CliQr not long ago. That’s also on the agenda for a chat, and I really want to dig into this more, since I’d only just heard of CliQr before the acquisition.

There’s also Metapod, Cisco’s converged infrastructure/cloud-in-a-box version of OpenStack, so that could well be interesting, not least to get a handle on how people are deploying OpenStack in the enterprise, and how it links into existing systems.

How Cisco is going to pack all this into their session will be a challenge, but I look forward to it.

CFD1 Prep: Druva


Druva is completely new to me, which is always fun.

A quick bit of research shows they’re aiming at the backup and recovery market, and the cloud angle comes from the software backing things up to either AWS or Azure. They appear to handle servers, VMs, endpoints (phones and tablets) as well as file sync and share, and cloud-based data like Office365, Google Docs, etc.

It looks like the secret sauce is some form of global deduplication to reduce the amount of data that needs to move over the WAN, an important hurdle that is often overlooked in bandwidth-rich places like the US (although tell that to T-Mobile in the Valley, hah!).
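As a rough illustration of the idea (not Druva’s actual implementation, which they don’t document publicly), hash-based global deduplication boils down to chunking data, fingerprinting each chunk, and only shipping chunks the backup service hasn’t seen before:

```python
# Toy sketch of hash-based global deduplication: split data into fixed-size
# chunks, fingerprint each with SHA-256, and only "upload" chunks the store
# hasn't seen before. Real products use variable-size (content-defined)
# chunking and a distributed index, but the WAN-saving principle is the same.
import hashlib

CHUNK_SIZE = 4096
seen = {}   # digest -> chunk; stands in for the global chunk index in the cloud

def backup(data):
    """Return the list of chunk fingerprints; store only previously unseen chunks."""
    manifest = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen:          # only new chunks cross the WAN
            seen[digest] = chunk
        manifest.append(digest)
    return manifest

first = backup(b"A" * 8192 + b"B" * 4096)
second = backup(b"A" * 8192 + b"C" * 4096)   # shares two chunks with the first backup
print(len(seen), "unique chunks stored for", len(first) + len(second), "chunks referenced")
```

The “global” part means the index is shared across all clients, so the hundredth laptop backing up the same OS files sends almost nothing over the wire.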

There are two main sub-brands: Druva inSync, which seems to be the backup and recovery product aimed at laptops, mobile devices, and SaaS data; and Druva Phoenix, which looks like a more typical server backup and DR product that backs up servers and databases, and converts VMs into Amazon Machine Images so you can start them in the cloud if the primary goes pop. It looks like there’s an on-site version as well, but it’s called On-Premise, so automatic fail there.

The trouble is, I’m struggling to figure out how Druva does all this. The Druva website is full of fluffy marketing writing, but is very short on actual details. It looks very slick and shiny, but there’s not a lot of depth here. I’ve even gone so far as to download some of the datasheets, but they’re similarly fluffy.

You can tell Druva Takes Security Seriously™ because of all the badges they have on their website. It’s certified secure! It’s enterprise grade! It uses something called Envelope Security, so I’m totally convinced.

I gave up some fake information to download a 451 Research analyst report from the website, which is more than a year old now, and it said that Druva came from endpoint backup land and was specialising in governance and chain-of-custody type legal hold stuff. That has a lot of appeal to executives with money, and I concur with the author of the report that this would contribute to Druva’s ability to differentiate and win large enterprise accounts.

The report appears to date from right about the time Phoenix was added to the portfolio, so server backup and DR was still very new for the company, as was the Office365 integration. That was over a year ago, so I’d expect quite a bit of progress has been made since, but I get the impression that marketing is leading the product at this company, certainly going by the website.

No doubt we’ll dig into the details during CFD1 and try to get some clearer answers about what Druva really does and what makes it special.

I swear, every time I turn around another four backup and recovery companies spring up as if from nowhere. There seem to be so very many lurking out there, it’s really hard to differentiate. Perhaps there will be a Great Reckoning, as is coming for all the primary storage companies, and we’ll see some consolidation. I know there’s already been some, with EMC buying Spanning Backup and Datto buying Backupify not too long ago, so perhaps the time of Reckoning is already looming.

CFD1 Prep: Scality


I was only peripherally aware of Scality until last week when I attended the Scality presentation at Tech Field Day Extra at VMworld 2016. I came away impressed.

Jerome Lecat is an entertaining presenter, but the product is what impressed me. Scality make a software-only, scale-out storage solution called the RING, so-called because of the ring topology at the heart of its architecture.

I’ve dug into the details courtesy of a technical whitepaper you can get from the Scality website for the low, low price of fake contact details. It’s a fairly straightforward multi-layer architecture where each software layer performs a specific function.

The protocol at the core of the RING architecture is based on Chord, introduced in a 2001 paper from MIT [PDF]. It is reminiscent of other scale-out protocols like Paxos or Raft, but it seems to focus on scaling to a very large number of nodes without any node needing to know the overall state of the network, just the status of a subset of nearby nodes. Scality have made their own extensions and adaptations to Chord, and use it as part of the overall storage service.
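The core idea in Chord is simple to sketch: nodes and keys are hashed onto the same circular identifier space, and each key is owned by the first node clockwise from it (its successor). Here’s a minimal toy version in Python; real Chord adds finger tables so lookups take O(log n) hops without any node holding the whole ring, and Scality’s production extensions obviously go well beyond this.

```python
# Toy sketch of the Chord idea: hash nodes and keys onto the same ring,
# and assign each key to its successor node (first node clockwise from it).
# Real Chord uses finger tables for O(log n) lookups without global
# knowledge; this version just sorts the whole ring for clarity.
import hashlib
from bisect import bisect_right

RING_BITS = 32

def ring_hash(name):
    """Map a name onto the circular identifier space."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (2 ** RING_BITS)

nodes = ["node-a", "node-b", "node-c", "node-d"]
ring = sorted((ring_hash(n), n) for n in nodes)
points = [p for p, _ in ring]

def successor(key):
    """The node responsible for a key: first node at or after the key's position."""
    idx = bisect_right(points, ring_hash(key)) % len(ring)
    return ring[idx][1]

for obj in ["object-1", "object-2", "object-3"]:
    print(obj, "->", successor(obj))
```

The attraction for storage is that adding or removing a node only moves the keys adjacent to it on the ring, rather than reshuffling everything.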

Layered on top of this core functionality is policy-based data protection (replication and erasure coding) and self-healing capabilities. The erasure coding implementation keeps the data chunks intact and adds parity data, rather than re-encoding the data as intermingled data-plus-parity chunks. This speeds up reads, and means the relatively costly erasure coding calculations only need to be performed on rebuilds.
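To illustrate the “keep the data chunks intact” point, here’s a toy single-parity example in Python. This is not Scality’s actual scheme (real implementations use Reed-Solomon with multiple parity chunks), but it shows the systematic approach: data chunks stay readable as-is, and parity only gets touched when something is lost.

```python
# Toy "systematic" erasure coding: data chunks are stored unmodified, with a
# separate XOR parity chunk that is only consulted on rebuild after a failure.
# Real systems use Reed-Solomon with several parity chunks; XOR gives
# protection against losing any single chunk.
from functools import reduce

def make_parity(chunks):
    """XOR equal-length chunks together to form one parity chunk."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

def rebuild(surviving, parity):
    """Recover the single missing chunk from the survivors plus parity."""
    return make_parity(surviving + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]   # data chunks stored as-is; reads never touch parity
parity = make_parity(data)           # parity computed once, at write time

lost = data.pop(1)                   # simulate losing a chunk
assert rebuild(data, parity) == lost
```

Because the data chunks are stored verbatim, normal reads are just reads; the expensive maths only happens during a rebuild, which is exactly the trade-off the whitepaper describes.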

Accessing the storage is performed through an access layer (funny that) using Connectors. Scality has an object storage heritage, but the underlying object store also has a native scale-out filesystem called SOFS that uses an internal distributed database called MESA to store the file metadata (inodes, directory hierarchy information, etc.). It’s not clear to me how SOFS/MESA and the Chord keyspace and distributed hash table interact, so that’s something I can ask during CFD1.

Scality uses an AWS S3-compatible API as well as its own native REST API for object access. AWS S3 is the de facto standard for object storage access now, so we should all just settle on the basic semantics of the protocol and move on. Scality also supports OpenStack Swift and Cinder, and somehow Glance and Manila as well, which intrigues me. I’m not an OpenStack guru, so I’ll be interested to hear more about how Scality interacts with OpenStack in these different ways.
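That S3 compatibility is worth spelling out, because it’s what makes these systems drop-in targets for existing tooling. The sketch below uses boto3 against a placeholder endpoint and placeholder credentials (I haven’t pointed it at a real RING); the point is that the client code is identical to what you’d write for AWS itself, with only the endpoint swapped out.

```python
# Minimal sketch: talking to an S3-compatible object store with boto3.
# The endpoint URL and credentials are placeholders, not Scality's actual
# values; any S3-compatible store accepts the same client code as AWS,
# with only the endpoint_url changed.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ring.example.com",   # hypothetical RING S3 endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_object(Bucket="demo", Key="hello.txt", Body=b"hello from the RING")
print(s3.get_object(Bucket="demo", Key="hello.txt")["Body"].read())
```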

Scality also supports NFSv3, SMB 2.0, and Linux FUSE for filesystem access, all of which talk to SOFS. Scality claim this is an improvement over some competitors that use a gateway approach to filesystem access, but really the Connectors are a gateway to the underlying system. The gateway mechanism is just baked into the product, so yes, it probably does provide some advantages, but again I’d like to know more.

It’s still pretty great that the one system can speak all of these different protocols. There’s no block access, but I’m ok with that, because LUNs are stupid and need to die.

RING v6 adds a bunch of enterprise features as well, such as Identity and Access Management through Active Directory integration, and Single Sign On via SAML assertions. These kinds of features make systems more attractive to enterprises because they can interoperate with the existing infrastructure, processes, and systems that an enterprise already has.

Newfangled startup things are fine for point solutions where you really need something new, but once you start extending into the rest of the organisation from your comfy toehold, you start needing to play well with others.

Scality are definitely one to watch, and I look forward to learning more about them.