Thorny technical questions remain for net neutrality

  
Not all online traffic is the same; should we treat it the same anyway? Credit: shutterstock.com

Federal rules mandating network neutrality – the concept that all internet traffic should be treated equally – were upheld recently by the D.C. Circuit Court of Appeals. The decision was hailed as a win by civil-rights groups, entrepreneurs and tech giants like Google, as well as by the Obama administration itself, which had proposed the rules in the first place. Under the rules, internet service providers (ISPs) may not speed up or slow down traffic of particular types or from particular sites.

There is an important principle at stake: Treating all internet traffic the same protects innovation. Otherwise, new services seeking to compete with Google, Facebook and the like would be at a significant disadvantage, unable to afford the network bandwidth needed to showcase their innovations.

But not all internet traffic is the same. Despite this significant legal win, it remains unclear how network neutrality should work in practice. Getting the details wrong risks creating a system in which customers don't get the best possible service, and society misses out on potential innovation.

In fact, there are several scenarios in which I'd argue ISPs really should be able to treat different types of traffic unequally, speeding some along while slowing others down. Imagine a particular network link is congested, as is often the case on mobile networks and at the facilities where ISPs connect to each other's networks. Congestion typically happens when many wireless customers in one place try to connect to the internet at once, or when one ISP sends more data than the receiving ISP can handle – as when Netflix customers' streaming maxed out Netflix's network links with Verizon in 2014.

Now consider two users whose internet traffic goes through the congested link. If one user is streaming video and another is backing up data to the cloud, should both of them have their data slowed down? Or would users' collective experience be best if those watching videos were given priority? That would mean slightly slowing down the data backup, freeing up bandwidth to minimize video delays and keep the picture quality high.
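To make that concrete, here is a minimal sketch – not any ISP's actual algorithm, with flow names, capacities and demands invented for illustration – of how a scheduler might serve delay-sensitive flows first on a congested link and give whatever capacity remains to delay-tolerant traffic:

```python
# Minimal sketch (not any ISP's actual algorithm): on a congested link,
# serve delay-sensitive flows up to their demand first; delay-tolerant
# flows get whatever capacity is left over.

LINK_CAPACITY_MBPS = 50  # hypothetical capacity of the congested link

flows = [
    {"name": "video stream", "demand_mbps": 25, "delay_sensitive": True},
    {"name": "voice call",   "demand_mbps": 1,  "delay_sensitive": True},
    {"name": "cloud backup", "demand_mbps": 40, "delay_sensitive": False},
]

def allocate(flows, capacity):
    """Give delay-sensitive flows their full demand first; the rest share what remains."""
    remaining = capacity
    allocation = {}
    # Delay-sensitive flows sort first (key False < True).
    for f in sorted(flows, key=lambda f: not f["delay_sensitive"]):
        share = min(f["demand_mbps"], remaining)
        allocation[f["name"]] = share
        remaining -= share
    return allocation

for name, mbps in allocate(flows, LINK_CAPACITY_MBPS).items():
    print(f"{name}: {mbps} Mbps")
# The backup still runs, just slightly slower, while the video and call stay smooth.
```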


What is 'reasonable management' anyway?

The Open Internet Order, as the federal net neutrality rules are formally known, does anticipate this, to some degree. It allows ISPs to use "reasonable network management" practices to keep data flowing, without violating the overall purpose of the regulations. However, the Federal Communications Commission has not yet defined what that means – perhaps because doing so is difficult.

It makes sense that when a link is not fully occupied, no traffic should be delayed. Similar logic suggests that when a link is overloaded, it might be useful to delay some traffic and prioritize the rest. But where do we draw the line between those two extremes?

And what if a link is mostly full, but not quite? Could an ISP throttle back some delay-tolerant traffic (such as software update downloads) to leave room in case new time-sensitive traffic (streaming video, internet phone calls) comes along? Or would it have to wait until the link was completely full before stepping in to manage priorities?
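One way to picture such a policy, purely as an illustration – the 80 percent threshold and the reserved headroom below are my own assumptions, not anything the FCC has defined or endorsed:

```python
# Illustrative sketch of one possible "reasonable management" policy:
# throttle delay-tolerant traffic only once the link is more than
# THRESHOLD full, keeping some headroom for time-sensitive traffic.
# The threshold and headroom figures are assumptions, not FCC rules.

THRESHOLD = 0.80      # start managing at 80% utilization (assumed)
HEADROOM_MBPS = 10    # bandwidth kept free for new time-sensitive flows (assumed)

def bulk_traffic_cap(capacity_mbps, current_load_mbps):
    """Return how much bandwidth delay-tolerant traffic may use right now."""
    utilization = current_load_mbps / capacity_mbps
    if utilization < THRESHOLD:
        # Link is not close to full: nothing is delayed at all.
        return capacity_mbps - current_load_mbps
    # Link is nearly full: leave headroom for streams and calls that may arrive.
    return max(0, capacity_mbps - current_load_mbps - HEADROOM_MBPS)

print(bulk_traffic_cap(100, 60))  # 40 Mbps free, no management needed
print(bulk_traffic_cap(100, 85))  # only 5 Mbps left for bulk transfers
```

Where exactly to set such a threshold – or whether a threshold is even the right mechanism – is precisely the kind of detail the rules leave open.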

Next, imagine that the only way to handle a full link pits one user's Skype call against another user's Netflix stream. How should the ISP weigh the effects its prioritization choices will have on each user's experience?

The FCC has not ruled on these technical details, but declaring what is reasonable will be important for consumers and ISPs alike. My colleagues and I are studying how to measure users' experience from internet traffic – rather than just quantifying the speed of data flows. One of our goals is to help ISPs understand the effects of various traffic-handling methods.
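As a rough illustration of the difference between raw speed and experience – this is not our actual measurement methodology, and the margins below are invented – the same throughput number can mean very different things depending on the application:

```python
# Illustrative only: the same throughput can produce very different
# user experiences. A 5 Mbps connection is ample for a 3 Mbps video
# stream but slow for a large download; raw speed alone misses that.

def video_experience(throughput_mbps, video_bitrate_mbps):
    """Rough proxy for streaming quality: can the stream keep its buffer full?"""
    if throughput_mbps >= video_bitrate_mbps * 1.2:  # 20% margin (assumed)
        return "smooth playback"
    if throughput_mbps >= video_bitrate_mbps:
        return "occasional quality drops"
    return "frequent rebuffering"

def download_experience(throughput_mbps, file_size_gb):
    """For bulk transfers, users mostly notice total completion time."""
    seconds = file_size_gb * 8000 / throughput_mbps  # 1 GB = 8000 megabits
    return f"finishes in about {seconds / 60:.0f} minutes"

print(video_experience(5, 3))      # smooth playback
print(download_experience(5, 4))   # the same 5 Mbps feels slow for a 4 GB file
```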

Finding ways around neutrality

Throttling and prioritization are not the only ways ISPs can improve or degrade the performance of internet traffic. ISPs can route traffic along a variety of network paths, which are not always as short, and therefore as fast, as they could be. An ISP could, for example, route traffic from one video streaming service (say, one the ISP itself owns) over a path with plenty of bandwidth, while sending a competitor's traffic along a more circuitous path with limited bandwidth.
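The concern can be shown with a toy example – the paths, hop counts and service names below are entirely invented – in which the routing choice is keyed to whose traffic it is, rather than to network conditions:

```python
# Hypothetical illustration of routing-based discrimination.
# All paths, numbers and service names are invented.

paths = {
    "direct peering link": {"hops": 3,  "capacity_mbps": 10000},
    "circuitous transit":  {"hops": 11, "capacity_mbps": 500},
}

# A policy keyed on who the traffic comes from, rather than on network
# conditions, is where the neutrality concern arises.
routing_policy = {
    "isp_owned_video.example":  "direct peering link",
    "competitor_video.example": "circuitous transit",
}

for service, path in routing_policy.items():
    p = paths[path]
    print(f"{service} -> {path} ({p['hops']} hops, {p['capacity_mbps']} Mbps)")
```

From the outside, both services simply appear to perform differently; without knowing the ISP's internal routing policy, it is hard to say why.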

ISPs tend to treat information about their network layout and capacity, routing policies and traffic-handling settings as competitive secrets not subject to public scrutiny. That makes it very hard to tell from the outside whether a particular routing decision is discriminatory or a legitimate network-management choice. The FCC's rules do allow government regulators to review some of this data, but only after a complaint has been made. Without the data, though, it's nearly impossible to document a pattern of discrimination that might warrant a complaint in the first place.

All in all, while the spirit of equality underlying the federal government's push for network neutrality is well-intentioned, a perfectly neutral network is not in consumers' best interests. How the rules of an imperfectly neutral network are set will determine whether the internet can indeed serve as a utility that fosters long-term innovation.
