Seattle (WA) – Ingenious solutions to problems often originate from very simple ideas. And, it appears, researchers from the University of Washington have found a way to substantially reduce Internet congestion for service providers that could ultimately save Internet users from much-discussed bandwidth caps. The idea: Keep Internet traffic local.
Internet service providers have not made many friends by starting a discussion about possible bandwidth caps. And even if some of these providers aren’t exactly smart with their choice of words, there is no denying that some Internet users consume much more bandwidth than others. It is generally believed that file-sharing across peer-to-peer (P2P) networks can account for 50 – 80% of Internet traffic at any given time. No matter how you look at it, the massive growth of this traffic could turn into a problem for all Internet users sooner or later.
But, as it turns out, there might be a very simple solution to this problem – a solution that could dramatically reduce the load on critical Internet infrastructure. Researchers at the University of Washington and Yale University propose a “neighborly approach to file swapping, sharing preferentially with nearby computers.”
The research group found that one of the problems with file-sharing is that data packets travel enormous distances, taking advantage of key portions of the global Internet infrastructure. For the networks considered in the field tests, the researchers calculated that the average peer-to-peer data packet currently travels 1000 miles and takes 5.5 metro-hops, which are connections through major hubs. With their “neighborhood network,” which focuses on reducing the overall distance data travels, those numbers came down to 160 miles on average and just 0.89 metro-hops – which means a much lighter load on significant Internet arteries between cities.
The network technology, dubbed P4P to indicate a next-generation P2P network, resulted in 58% of file-sharing traffic staying local, compared to only 6% in the real world today. "Initial tests have shown that network load could be reduced by a factor of five or more without compromising network performance," said co-author Arvind Krishnamurthy, a UW research assistant professor of computer science and engineering. "At the same time, speeds are increased by about 20%."
Overall, the researchers said that the “experiments demonstrated that P4P either improves or maintains the same level of application performance of native P2P applications, while, at the same time, it substantially reduces network provider cost compared with either native or latency-based localized P2P applications.”
Internet service providers apparently are aware of the project: the research group said that a working group formed last year to explore P4P and now includes more than 80 members, including representatives from all the major U.S. Internet service providers and many companies that supply content. In order to be implemented into the current Internet infrastructure, the researchers said, the P4P system requires Internet service providers to provide a number that acts as a weighting factor for network routing. That means that cooperation between the Internet service provider and the file-sharing host will be necessary. However, this would not require companies to disclose details about how they route Internet traffic, the researchers noted.
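To make the idea concrete, the weighting scheme can be sketched roughly like this: the ISP publishes a cost number for each candidate peer (or network region), and the file-sharing application prefers peers with favorable weights when choosing whom to download from. The function name, the scoring scheme, and the example weights below are all hypothetical illustrations, not the actual P4P interface:

```python
# Hypothetical sketch of P4P-style locality-aware peer selection.
# The ISP-supplied weight acts as a routing-cost hint: a lower weight
# means the peer is "closer" in the provider's network, so prefer it.

def select_peers(candidates, isp_weight, k=3):
    """Return the k candidate peers with the lowest ISP-supplied weight.

    candidates: list of peer IDs
    isp_weight: dict mapping peer ID -> weighting factor from the ISP
    """
    # Peers with no published weight are treated as distant (infinite cost).
    return sorted(candidates, key=lambda p: isp_weight.get(p, float("inf")))[:k]

# Example: peers A and B sit in the requester's metro area (low weight),
# while C and D are several metro-hops away (high weight).
weights = {"A": 1.0, "B": 2.0, "C": 9.0, "D": 12.0}
print(select_peers(["D", "C", "B", "A"], weights, k=2))  # ['A', 'B']
```

The point of the single number is exactly what the researchers describe: it lets the application bias its choices toward local peers without the ISP revealing its internal routing topology.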
If P4P can keep its promise, it is without doubt a welcome technology and an idea that came up at the right time. But it is a bit surprising that we have heard so much about possible bandwidth caps in recent weeks, despite the fact that possible solutions to reduce overall Internet traffic are already appearing on the horizon. It certainly doesn’t create credibility for the major Internet service providers in the U.S., which have been very aggressive in taking the bandwidth-cap debate public.
P4P in fact looks like a fantastic idea for now, and it may buy Internet service providers some time before a major network upgrade becomes necessary – or bandwidth volumes are capped.