Networking: Four ways to reinvent the Internet
The Internet is struggling to keep up with the ever-increasing demands placed on it. Katharine Gammon looks at ways to fix it.
The Internet is feeling the strain. Developed in the 1970s and 1980s for a community of a few thousand researchers, most of whom knew and trusted one another, the Internet has now become a crucial worldwide infrastructure that connects nearly two billion people, roughly a quarter of humanity. It offers up something like a trillion web pages, and transports roughly 10 billion gigabytes of data a month — a figure that is expected to quadruple by 2012. Moreover, those two billion users are exploiting the network in ways that its creators only dimly imagined, with applications ranging from e-commerce to cloud computing, streaming audio and video, ubiquitous mobile devices and the transport of massive scientific data sets from facilities such as the Large Hadron Collider, the world's highest-energy particle accelerator, based near Geneva, Switzerland.

To some extent, this rapidly rising flood of information has been dealt with by updating the software and expanding the size of the data pipes — a development that most Internet users experience through the proliferation of 'broadband' services provided through cable television connections, digital subscriber lines and wireless hot spots. Yet users continue to be plagued by data congestion, slowdowns and outages, especially in wireless networks. And, as dramatized in January when search-engine giant Google publicly protested against digital assaults coming from somewhere in China, everyone on the Internet is vulnerable to cyberattack by increasingly sophisticated hackers who are almost impossible to trace — security having been an afterthought in the Internet's original design.
The result has been a rising sense of urgency within the networking research community — a conviction that the decades-old Internet architecture is reaching the limits of its admittedly remarkable ability to adapt and needs a fundamental overhaul. Since 2006, for example, the Future Internet Design (FIND) programme run by the US National Science Foundation (NSF) has funded researchers trying to develop wholesale redesigns of the Internet. And since October 2008, the NSF has operated the Global Environment for Network Innovations (GENI): a dedicated, national fibre-optic network that researchers can use to test their creations in a realistic setting. Similar efforts are under way in Europe, where the Future Internet Research and Experimentation (FIRE) initiative is being funded through the European Union's Seventh Framework research programme; and in Japan, where in 2008 the National Institute of Information and Communications Technology launched JGN2plus, the latest iteration of its Japan Gigabit Network system.
Buoyed by these funding initiatives, researchers have been testing out a plethora of ideas for reinventing the Internet. It is still too early to know which will pan out. But the following four case studies give a sense of both the possibilities and the challenges.
Make the pipes adaptable
The problem with the bigger-and-bigger-data-pipe approach to dealing with the Internet's growth is that it perpetuates a certain dumbness in the system, says electrical engineer Keren Bergman of Columbia University in New York. Right now, there is no way for a user to say: "This ultrahigh-resolution video conference I'm in is really important, so I need to send the data with the least delay and highest bandwidth possible", or "I'm just doing routine e-mail and web surfing at the moment, so feel free to prioritize other data". The network treats every bit of data the same. There is also no way for the Internet to minimize redundancy. If 1,000 people are logged into a massively multiplayer role-playing game such as World of Warcraft, the network has to provide 1,000 individual data streams, even though most are close to identical.
The result is a lot of wasted capacity, says Bergman, not to mention a lot of wasted money for users who have to pay extra for high-capacity data connections that they will need only occasionally. If the Internet could just adapt intelligently to what its users are trying to do, she says, it could run much more data through the pipes than it does now, thereby giving users much more capacity at a lower cost.
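As a rough illustration of what such intelligence could look like, here is a minimal sketch assuming a hypothetical per-flow priority hint, something today's best-effort Internet has no standard way to convey end to end: a toy scheduler that drains the most urgent flows first.

```python
import heapq

class PriorityScheduler:
    """Toy scheduler: sends packets from the most urgent flows first.
    The 'priority' value is a hypothetical hint supplied by the
    application; nothing like it is reliably honoured on today's
    Internet, which treats every bit the same."""

    def __init__(self):
        self._queue = []       # (priority, arrival order, packet)
        self._counter = 0      # tie-breaker that preserves arrival order

    def enqueue(self, packet: bytes, priority: int) -> None:
        # Lower number = more urgent (e.g. 0 for the video conference,
        # 9 for background e-mail syncing).
        heapq.heappush(self._queue, (priority, self._counter, packet))
        self._counter += 1

    def dequeue(self) -> bytes:
        # The most urgent waiting packet goes out first.
        _, _, packet = heapq.heappop(self._queue)
        return packet

sched = PriorityScheduler()
sched.enqueue(b"email chunk", priority=9)
sched.enqueue(b"video-conference frame", priority=0)
assert sched.dequeue() == b"video-conference frame"
```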
This is easier said than done, however, because the dumbness is deliberate. In an effort to simplify the engineering, Bergman explains, the architecture of the Internet is carefully segregated into 'layers' that take one another for granted. This means that application programmers, for example, don't have to worry about physical data connections when they are developing new software for streaming video or online data processing; they can just assume that the bits will flow. Likewise, engineers working on the physical connections can ignore what the applications are doing. And neither has to worry about in-between layers such as TCP/IP (Transmission Control Protocol/Internet Protocol): the fundamental Internet software that governs how digital messages are broken up into 'packets', routed to their destination, then reassembled.
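The packetize-and-reassemble job that the transport layer performs on the application's behalf can be pictured with a toy sketch, a simplification for illustration rather than real TCP; the 1,500-byte payload size is simply a typical value.

```python
def packetize(message: bytes, size: int = 1500):
    """Split an application message into numbered chunks, roughly as a
    transport protocol does before handing them to the network layer
    for routing (the payload size here is only illustrative)."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Packets may arrive out of order; sequence numbers restore it."""
    return b"".join(chunk for _, chunk in sorted(packets))

msg = b"x" * 4000
pkts = packetize(msg)
pkts.reverse()                      # simulate out-of-order delivery
assert reassemble(pkts) == msg
```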
But this clean separation also stops the layers from communicating with one another, says Bergman, which is exactly what they need to do if the data flow is to be managed intelligently. Working out how to create such a 'cross-layer' networking architecture is therefore one of the central goals of Bergman's Lightwave Research Laboratory at Columbia. The idea is to provide feedbacks between the physical data connection and the higher-level routing and applications layers, then to use those feedbacks to help the layers adjust to one another and optimize the network's performance.
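One way to picture that cross-layer feedback is the conceptual sketch below; the link names, bit-error-rate figures and threshold are invented for illustration and are not Bergman's design. The routing layer periodically reads a quality measurement from the physical layer and steers traffic away from a degrading optical link.

```python
# Conceptual cross-layer feedback: physical-layer measurements inform routing.
# All values below are hypothetical, chosen purely for illustration.

physical_layer = {
    # link id: measured bit-error rate reported by the optical hardware
    "fibre-A": 1e-12,
    "fibre-B": 1e-4,   # degraded, e.g. an amplifier drifting out of spec
}

BER_THRESHOLD = 1e-6

def choose_link(candidates):
    """Routing layer: prefer links whose physical-layer feedback says
    they are healthy; otherwise fall back to the least-bad option."""
    healthy = [l for l in candidates if physical_layer[l] < BER_THRESHOLD]
    if healthy:
        return min(healthy, key=lambda l: physical_layer[l])
    return min(candidates, key=lambda l: physical_layer[l])

print(choose_link(["fibre-A", "fibre-B"]))   # -> fibre-A
```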
This kind of adaptability is not new in networking, says Bergman, but it has been difficult to implement for the fibre-optic cables that are carrying more and more of the Internet's traffic. Unlike standard silicon electronics, optical data circuits are not easily programmable. As a result, many of the dozen projects now under way in her lab aim to integrate optics with programmable electronic systems.
Bergman's lab is also a key member of the NSF-funded Center for Integrated Access Networks, a nine-university consortium headquartered at the University of Arizona in Tucson. Her group's efforts have helped to drive many of the technology development projects at the centre, which hopes to ultimately deliver data to users at rates that approach 10 gigabits a second, roughly 1,000 times faster than the average household broadband connection today. "The challenges are to deliver that information at a reasonable cost in terms of money and power," says Bergman.
Control the congestion
Meanwhile, however, some researchers are taking issue with TCP itself. Any time a data packet fails to reach its destination, explains Steven Low, professor of computer science and electrical engineering at the California Institute of Technology (Caltech) in Pasadena, TCP assumes that the culprit is congestion at one of the router devices that switch packets from one data line to another. So it orders the source computer to slow down and let the backlog clear. And generally, says Low, TCP is right: whenever too many data try to crowd through such an intersection, the result is a digital traffic jam, followed by a sudden spike in the number of packets that get garbled, delayed or lost. Moreover, the tsunami of data now pouring through the Internet means that congestion can crop up anywhere, whether the routers are switching packets between high-capacity fibre-optic land lines carrying data across a continent, or funnelling them down a copper telephone wire to a user's house.
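In miniature, that loss-triggered back-off resembles the classic additive-increase/multiplicative-decrease rule used by loss-based TCP variants; the sketch below is a simplification, with window sizes measured in packets.

```python
def update_window(cwnd: float, packet_lost: bool) -> float:
    """Classic loss-based congestion control in miniature: any lost
    packet is read as a sign of congestion, so the sender halves its
    congestion window; otherwise it creeps up by one packet per round
    trip (the congestion-avoidance phase)."""
    if packet_lost:
        return max(1.0, cwnd / 2)   # multiplicative decrease
    return cwnd + 1.0               # additive increase

cwnd = 10.0
for lost in [False, False, True, False]:
    cwnd = update_window(cwnd, lost)
print(cwnd)   # 10 -> 11 -> 12 -> 6 -> 7
```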
But more and more often, says Low, simple congestion is not the reason for lost packets, especially when it comes to smart phones, laptop computers and other mobile devices. These devices rely on wireless signals, which are subject to interference from hills, buildings and the like, and they have to hand their connections from one wireless hub to the next as they move around. All of this creates many opportunities for things to go wrong in ways that won't be helped by slowing down the source, a practice that just bogs down the network unnecessarily.
Researchers have been exploring several more-flexible ways to transmit data, says Low. One of these is FAST TCP, which he and his Caltech colleagues have been developing over the past decade, and which is now being deployed by the start-up company FastSoft in Pasadena. FAST TCP bases its decisions on the delay faced by packets as they go through the network. If the average delay is high, congestion is probably the cause, and FAST TCP reduces speed as usual. But if the delay is not high, FAST TCP assumes that something else is the problem and sends packets along even faster, helping to keep the network's overall transmission rate high.
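The logic Low describes can be sketched as a toy delay-based rule; this is not the published FAST TCP update equation, and the threshold and rate factors below are invented for illustration.

```python
def adjust_rate(rate: float, rtt: float, base_rtt: float,
                delay_threshold: float = 1.5) -> float:
    """Toy delay-based control, loosely in the spirit of the approach
    Low describes (not the actual FAST TCP rule): a round-trip time
    well above the uncongested baseline suggests queues are building,
    so back off; otherwise losses probably stem from something else,
    such as wireless interference, so keep pushing."""
    if rtt > delay_threshold * base_rtt:
        return rate * 0.8    # queueing delay high: genuine congestion
    return rate * 1.1        # delay low: don't punish non-congestion loss

rate = 100.0                                           # Mbit/s, illustrative
rate = adjust_rate(rate, rtt=0.030, base_rtt=0.025)    # low delay -> speed up
print(round(rate, 1))                                  # 110.0
```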
To test his FAST TCP algorithms, Low's lab teamed up with Caltech's high-energy physics community, which needed to transmit huge files to researchers in 30 countries on a daily basis. From 2003 to 2006, the team broke world records for network data transfer at the International Supercomputing Conference's annual Bandwidth Challenge, which is carried out on the ultrahigh-speed US research networks Internet2 and National LambdaRail. In the 2006 event, they demonstrated a sustained speed of 100 gigabits per second and a peak transfer speed of 131 gigabits per second — records that have not been substantially bettered by subsequent winners of the challenge.
Integrate social-networking concepts
What's great about the Internet, says computer scientist Felix Wu of the University of California, Davis, is that anyone with an address on the network can contact anyone else who has one. But that's also what's terrible about it. "Global connectivity means you have no way to prevent large-scale attacks," he says, citing as an example recent digital assaults that have temporarily shut down popular sites such as Twitter. "At the same time you are getting convenience, you are actually giving people the power to do damage."
In 2008, for example, security software maker Symantec in Mountain View, California, detected 1.6 million new threats from computer viruses and other malicious software — more than double the 600,000 or so threats detected the previous year — and experts say that these attacks will only get more common and more sophisticated in the future.
What particularly drew Wu's attention a few years ago was the problem of unsolicited junk e-mail, or 'spam', which accounts for an estimated 90–95% of all e-mails sent. What makes spam trivial to broadcast and hard to filter out, Wu reasoned, is the Internet's anonymity: the network has no social context for either the message or the sender. Compare that with ordinary life, where people generally know the individuals they are communicating with, or have some sort of connection through a friend. If the network could somehow be made aware of such social links, Wu thought, it might provide a new and powerful defence against spam and other cyberattacks.
With funding from the NSF, Wu has created a test bed for such ideas, called Davis Social Links. The test bed has a messaging system that routes packets between users on the basis of the lists of friends that each person creates in social networking sites such as Facebook. This gives test-bed users the option of accepting only the messages that reach them through the paths or groups they trust, making it more difficult for them to be reached by spammers or attackers who lack the proper trusted paths.
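The underlying idea can be sketched with a toy friend graph and a breadth-first search for a short chain of trust; the names, graph and hop limit below are invented for illustration, and Davis Social Links itself is considerably more elaborate.

```python
from collections import deque

# Hypothetical friendship graph, e.g. imported from a social-networking site.
friends = {
    "alice":   {"bob", "carol"},
    "bob":     {"alice", "dave"},
    "carol":   {"alice"},
    "dave":    {"bob"},
    "spammer": set(),          # no declared social links to anyone
}

def trusted_path_exists(sender, recipient, max_hops=3):
    """Breadth-first search over the friend graph: can the sender be
    reached from the recipient within a few hops of declared trust?"""
    frontier, seen = deque([(recipient, 0)]), {recipient}
    while frontier:
        person, hops = frontier.popleft()
        if person == sender:
            return True
        if hops < max_hops:
            for friend in friends.get(person, ()):
                if friend not in seen:
                    seen.add(friend)
                    frontier.append((friend, hops + 1))
    return False

def accept_message(sender, recipient):
    # A recipient can opt to take only messages arriving via trusted paths.
    return trusted_path_exists(sender, recipient)

print(accept_message("dave", "alice"))     # True: dave - bob - alice
print(accept_message("spammer", "alice"))  # False: no social route exists
```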
These social relationships in the system don't have to be restricted to people, Wu notes. Websites are fair game too. Users of Davis Social Links can build social relationships with YouTube, for example. A search engine based on this social-network idea might pick up two sites that claim to be YouTube, one that is real and one that is cloned to look like the video site. The system would try one and, if it lacked the expected connections to other trusted contacts, would designate that path as untrustworthy and drop the site. "In today's routing you only give the IP address to the service provider, they do the rest," says Wu. "In social routing I don't have a unique identity. I have a social identity that supports different routing."
Davis Social Links is part of the GENI test bed and will soon start testing with up to 10 million network nodes. But even if this approach turns out not to be viable, says Wu, more types of social research need to be integrated into the future Internet. "We need to mimic real human communication," he says.
Break from reality
Computer scientist Jonathan Turner of Washington University in St Louis, Missouri, says that the basic packet-delivery service hasn't changed in more than 20 years not because no one has a better idea, but because new ideas can't get a foothold. "It's increasingly difficult for the public Internet to make progress," he says. The network's infrastructure is fragmented among many thousands of network providers who are committed to the Internet as it is, and who have little motivation to cooperate on making fundamental improvements.
This spectre of a rapidly ossifying Internet has made Turner a champion of data channels known as virtual networks. In such a network the bits of data flow through real, physical links. But software makes it seem as though they are flowing along totally different, fictitious pathways, guided by whatever rules the users desire.
In present-day commercial virtual networks, those rules are just the standard network protocols, says Turner. But it is possible to create virtual networks that work according to totally new Internet protocols, he says, making them ideal laboratories for researchers to experiment with alternatives to the current standards. His group, for instance, is working on virtual networks that enable classes of applications that are not well-served by the current Internet. "This includes person-to-person voice communication, person-to-person video, fast-paced multi-player games and high-quality virtual world applications," he says. "In general, any application where the quality of the user experience is dependent on non-stop data delivery and there is low tolerance for delay."
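A bare-bones way to picture such virtualization, with an invented header format and invented virtual-network names (a sketch of the encapsulation idea, not Turner's system): packets for an experimental protocol are wrapped in an ordinary payload, carried across the existing infrastructure, then unwrapped and handled by whatever rules that virtual network defines.

```python
import json

def encapsulate(vnet_id: str, inner_packet: dict) -> bytes:
    """Wrap an experimental-protocol packet inside an ordinary payload
    so the existing infrastructure can carry it unchanged (the JSON
    'header' here is purely illustrative)."""
    return json.dumps({"vnet": vnet_id, "inner": inner_packet}).encode()

def deliver(raw: bytes, handlers: dict) -> None:
    """At a virtual-network node, unwrap the payload and hand the inner
    packet to the custom rules that its virtual network defines."""
    frame = json.loads(raw)
    handlers[frame["vnet"]](frame["inner"])

# Each virtual network supplies its own, possibly non-standard, rules.
handlers = {
    "low-latency-game-net": lambda pkt: print("fast-path delivery:", pkt),
    "content-centric-test": lambda pkt: print("cache then forward:", pkt),
}

raw = encapsulate("low-latency-game-net", {"seq": 1, "data": "player position"})
deliver(raw, handlers)
```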
Moreover, Turner is just one of many researchers pursuing this approach. An academic–industry–government consortium known as PlanetLab has been providing experimental virtual networks on a worldwide collection of computers since 2002. The GENI test bed is essentially a collection of virtual networks, all of which run atop Internet2 and National LambdaRail. This allows the same physical infrastructure to handle multiple experiments simultaneously — including many of the experiments mentioned in this article.
Looking farther down the road, says Turner, as the best of these non-standard, experimental protocols mature to the point of being ready for general use, virtual networks could become the mechanism for deploying them. They could simply be built on top of the Internet's existing physical infrastructure, and users could adopt the new functionality one by one. Different protocols could compete in the open marketplace, says Turner — and the era of the ossified Internet would give way to the era of the continuously reinvented Internet.