Making Bandwidth Work With Riverbed

A network switch with cables connected, and green status lights illuminated.

Riverbed are well known for making WAN accelerator devices, having pretty much pioneered the idea. Since the pandemic hit, there’s been a massive surge in people working remotely over network infrastructure that wasn’t designed with that level of load in mind, and network teams have been scrambling to reconfigure their networks to cope. It’s a boom time for Riverbed.

If you’ve not encountered WAN acceleration before, the core of the idea is that a lot of the data sent over TCP/IP networks is redundant: much of it has been sent before, or isn’t really needed. By dropping a smart piece of software into the network path (usually running on a dedicated hardware device, the WAN acceleration appliance) you can intercept traffic and strip out the redundant parts, sending only the critical information; a matching appliance at the other end of the link then reconstructs the original traffic before passing it on.
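
To make the data-reduction idea concrete, here’s a deliberately simplified Python sketch of dual-ended deduplication: traffic is cut into fixed-size chunks, chunks are hashed, and anything the far end has already seen is replaced with a short reference that the far-end appliance expands back into the original bytes. This is only an illustration of the general technique, not how Riverbed’s appliances actually work (they use much smarter variable-size chunking, compression, and protocol-specific optimisations), and every name and number in it is made up for the example.

    import hashlib

    CHUNK_SIZE = 256  # illustrative fixed chunk size; real appliances use variable-size chunking

    def deduplicate(stream: bytes, cache: dict) -> list:
        """Sender side: replace chunks the link has carried before with short hash references."""
        messages = []
        for i in range(0, len(stream), CHUNK_SIZE):
            chunk = stream[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).digest()
            if digest in cache:
                messages.append(("ref", digest))   # 32 bytes instead of a full chunk
            else:
                cache[digest] = chunk              # remember it for next time
                messages.append(("raw", chunk))
        return messages

    def reconstruct(messages: list, cache: dict) -> bytes:
        """Receiver side: rebuild the original stream from raw chunks and references."""
        parts = []
        for kind, payload in messages:
            if kind == "raw":
                cache[hashlib.sha256(payload).digest()] = payload  # receiver caches new chunks too
                parts.append(payload)
            else:
                parts.append(cache[payload])       # expand the reference from the local cache
        return b"".join(parts)

    sender_cache, receiver_cache = {}, {}
    block = b"chatty CIFS read of the same file block ".ljust(CHUNK_SIZE, b".")
    data = block * 50                              # highly repetitive 'office' traffic
    wire = deduplicate(data, sender_cache)
    sent = sum(len(payload) for _, payload in wire)
    assert reconstruct(wire, receiver_cache) == data
    print(f"original {len(data)} bytes, sent roughly {sent} bytes over the WAN")

Repetitive traffic like this shrinks dramatically (one full chunk plus 49 short references in this toy example), which is the same basic trade the real appliances make: spend memory and CPU at each end of the link to avoid spending WAN bandwidth.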

This helps a lot for certain kinds of typical office network traffic, like CIFS and NFS, which (depending on the version and how it’s set up) can be very ‘chatty’: the endpoints constantly exchange lots of small requests, responses, and acknowledgements with each other.
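
To see why chattiness matters so much once a WAN is involved, here’s a rough back-of-the-envelope model (all of the numbers are invented for illustration, not measurements): when a protocol needs hundreds of synchronous round trips to do something, total time on a WAN link is dominated by latency, while on a LAN the same exchange barely registers.

    def transfer_time(round_trips: int, rtt_seconds: float,
                      payload_bytes: int, bandwidth_bps: float) -> float:
        """Crude model: time spent waiting on round trips plus time to push the bytes."""
        return round_trips * rtt_seconds + (payload_bytes * 8) / bandwidth_bps

    # Hypothetical: opening a 1 MB file with a protocol that needs 200 round trips.
    for label, rtt, bandwidth in [("LAN", 0.0005, 1e9), ("WAN", 0.040, 100e6)]:
        print(f"{label}: {transfer_time(200, rtt, 1_000_000, bandwidth):.2f} s")

With these made-up numbers the LAN case finishes in about a tenth of a second, while the WAN case takes around eight seconds, nearly all of it latency rather than bandwidth; that is exactly the sort of behaviour WAN accelerators try to mitigate, by handling as much of the chatty back-and-forth as they can locally and deduplicating whatever does have to cross the link.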

All that chatter may not be as big a deal in the office itself, with big fat 1 and 10 gigabit-per-second links to every desk, but when you’re trying to connect offices over expensive megabit or gigabit WAN connections, providing enough bandwidth for everything gets very expensive, or you end up with congested networks. In my experience, dropping a WAN accelerator at each end of your main inter-office WAN links regularly paid for itself in savings on the WAN bandwidth rented from the telco (though that was some years ago now).

The problem of constrained bandwidth is only magnified when you have more complex network topologies, like people working at home over their local ISP’s 100 Mb/s link (or less!), coming in through some sort of VPN termination service and then connecting to their regular network services. Riverbed now have a bunch of options for monitoring your network to help troubleshoot problems, and they walked us through a typical scenario during TFD22.

So many applications now run essentially as software-as-a-service, even when self-hosted in a corporate data-centre, that the network is vital for people to be able to get things done. Sun was right: the network is the computer.

But not all networks are created equal.

Bandwidth Inequality

Back when computing infrastructure was breathtakingly expensive, a lot of effort went into optimising resource use to get the most out of what was available. As infrastructure has gotten cheaper and software abstractions have accumulated, software has gotten lazier and more resource-hungry. Dan Luu found that modern computer interfaces have worse latency than computers from the 1970s.

When software runs as-a-service over a network, as many modern applications do, this problem is magnified. It’s made worse by developers with expensive high-end workstations writing apps that run in a cloud system they access from 4 miles away over 100 gigabit fibre, because they all live in Silicon Valley. Their experience is nothing like that of a person with a six-year-old corporate-issued laptop with 4 GB of RAM and a 20 Mb/s DSL connection at home, let alone someone on a hand-me-down iPhone 4 in a remote community.

It’s relatively easy to throw hardware at the problem when it’s inside your corporate network and it’s cheaper to buy everyone new laptops and upgrade the network than to pay a team of 12 expensive software developers to stop shipping new features and spend a year optimising the code.

But when you suddenly introduce systems you can’t control into the mix, like the limited ISP choices available where people live when they’re trying to work from home during a pandemic, the lack of optimisation can really hit hard. You end up in a tricky situation with few good options that can be implemented quickly, and no time to do anything but move with haste.

There’s a real risk that a lot of the ‘solutions’—workarounds, bodges, patches, and hacks—that people have put in place, and continue to use, will just mask the problems with complex, brittle systems that make everything worse in the long run.

So consider slowing down a bit and finding a solution that might not be quite as fast but will be more sustainable. Take the time to optimise things once you’ve gotten them working. Throw away your prototype and rebuild properly, using what you learned.

Because the best acceleration is not needing any at all.
