US broadband speeds aren’t nearly as bad as they’re painted, says MIT: a series of mis-estimates in a Federal Communications Commission survey made them look worse than they are.
The FCC’s National Broadband Plan, released in March, reported that most American households were getting only 40 to 50 percent of the advertised ‘up to’ speeds they were being quoted.
But a new MIT study indicates that most methods for measuring internet data rates, including those used by the FCC, underestimate the speed of the access network.
The number of devices accessing a home wireless network, the internal settings of a home computer and the location of the test servers can all affect measurements of broadband speed.
“If you are doing measurements, and you want to look at data to support whatever your policy position is, these are the things that you need to be careful of,” says Steve Bauer, technical lead on the MIT Internet Traffic Analysis Study (MITAS). “For me, the point of the paper is to improve the understanding of the data that’s informing those processes.”
The researchers analyzed a half-dozen different systems for measuring the speed of internet connections, from free applications to commercial software licensed by ISPs.
In each case, the underestimation of the access networks’ speed had a different cause.
The FCC study, for instance, analyzed data for broadband subscribers with different tiers of service. But the analysts didn’t know which data corresponded to which tier of service, so they assumed that the subscription tier could be inferred from the maximum measured rate.
In fact, though, the subscribers in lower tiers sometimes ended up getting higher data rates than they’d paid for. In the study cited by the FCC, therefore, exceptionally good service for a low tier may have been misclassified as exceptionally bad service for a higher tier.
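That misclassification can be sketched with made-up numbers. In the snippet below, the tiers, measured rates, and the `infer_tier` helper are all hypothetical illustrations of the inference the study describes, not figures from the FCC data:

```python
# Hypothetical subscription tiers (advertised "up to" rates, Mbit/s).
# None of these numbers come from the FCC study.
TIERS = [5, 15, 25]

def infer_tier(max_measured):
    """Assign a subscriber to the smallest tier covering the maximum
    rate ever measured on their line (the assumption the MIT study
    says the analysts had to make)."""
    for tier in TIERS:
        if max_measured <= tier:
            return tier
    return TIERS[-1]

# A 5 Mbit/s subscriber whose line briefly delivered 18 Mbit/s,
# i.e. exceptionally good service for the low tier:
actual_tier, max_rate, typical_rate = 5, 18, 6

inferred = infer_tier(max_rate)
print(inferred)                    # lands in the 25 Mbit/s tier
print(typical_rate / inferred)     # looks like 24% of "advertised" speed
print(typical_rate / actual_tier)  # really 120% of what was paid for
```

The same typical rate is scored against the wrong denominator, so good low-tier service shows up as bad high-tier service.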
In other tests, inaccurate measurements were the result of an idiosyncrasy of the Transmission Control Protocol (TCP). With TCP, the receiving computer advertises how much data it is prepared to accept at any point in time, and on some common operating systems the default setting is simply too low.
In real life, many applications get around this by opening multiple TCP connections at once. But if an internet speed test is designed to open only one TCP connection, data rates end up looking artificially low.
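The window limit is simple arithmetic: a sender can have at most one receive window of unacknowledged data in flight per round trip, so throughput over a single connection is capped at window size divided by round-trip time. A sketch with illustrative values (the 64 KiB window and 50 ms round trip are assumptions for the example, not measurements from the study):

```python
# Single-connection TCP ceiling: at most one receive window of data
# can be in flight per round trip.
def tcp_ceiling_mbps(window_bytes, rtt_seconds):
    return window_bytes * 8 / rtt_seconds / 1e6

WINDOW = 64 * 1024   # 64 KiB, a historically common OS default
RTT = 0.05           # 50 ms round trip (assumed)

single = tcp_ceiling_mbps(WINDOW, RTT)
print(f"{single:.1f} Mbit/s")      # ceiling for one connection

# A speed test that opens four parallel connections, as many real
# applications do, can roughly quadruple the achievable rate:
print(f"{4 * single:.1f} Mbit/s")
```

On these assumed numbers a single connection tops out near 10 Mbit/s regardless of how fast the access link actually is, which is why a one-connection test can make a faster line look slow.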
In other cases, overloaded servers in the test system redirect requests to other, more distant servers, again slowing data rates.
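Distance hurts for a related reason: a redirect to a farther server lengthens the round trip, and a window-limited TCP connection moves only one receive window of data per round trip, so the measured rate drops in proportion. A quick sketch with assumed round-trip times:

```python
def ceiling_mbps(window_bytes, rtt_seconds):
    # One receive window of data per round trip.
    return window_bytes * 8 / rtt_seconds / 1e6

WINDOW = 64 * 1024  # assumed default receive window (64 KiB)

near = ceiling_mbps(WINDOW, 0.02)  # nearby server, 20 ms round trip
far = ceiling_mbps(WINDOW, 0.10)   # distant server, 100 ms round trip
print(f"{near:.1f} vs {far:.1f} Mbit/s")  # five times the RTT, a fifth the rate
```

The access link is identical in both cases; only the test server moved.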
The results point to the difficulty of using a single data rate to characterize a broadband network’s performance.
“If you’re watching lots of movies, you’re concerned about how much data you can transfer in a month and that your connection goes fast enough to keep up with the movie for a couple hours,” says economist William Lehr.
“If you’re playing a game, you care about transferring small amounts of traffic very quickly. Those two kinds of users need different ways of measuring a network.”
The researchers have submitted their report to both the FCC and the Federal Trade Commission.
“This report from Dave, Steve Bauer, and Bill Lehr is the first comparative study that I’ve seen,” says FCC spokesman Walter Johnson. “What we’re doing right now is a follow-up to the broadband plan, recognizing that we need better data.”
The FCC is currently in the early stages of a new study that will measure broadband speeds in 10,000 homes, using dedicated hardware that bypasses problems like TCP settings or the limited capacity of home wireless networks.