It is a simple enough trick: it just alternates random node data transfer with data transfer to an as-yet-unheard-from node. Inevitably the bottleneck nodes get the necessary special attention, and the overall network naturally speeds up.
I am surprised this has not been thought of before, since the problem is obvious and is obviously overcome by addressing it more often. It has likely been invented many times already.
Breaking bottlenecks
A new algorithm enables much faster dissemination of information through self-organizing networks with a few scattered choke points.
Larry Hardesty, MIT News Office
A new algorithm spreads information (red) much more efficiently in networks characterized by sparse connections between densely interlinked clusters.
Graphic: Christine Daniloff
January 11, 2011
As sensors that do things like detect touch and motion in cell phones get smaller, cheaper and more reliable, computer manufacturers are beginning to take seriously the decade-old idea of “smart dust” — networks of tiny wireless devices that permeate the environment, monitoring everything from the structural integrity of buildings and bridges to the activity of live volcanoes. In order for such networks to make collective decisions, however — to, say, recognize that a volcano is getting restless — they need to integrate information gathered by hundreds or thousands of devices.
But networks of cheap sensors scattered in punishing and protean environments are prone to “bottlenecks,” regions of sparse connectivity that all transmitted data must pass through in order to reach the whole network. At the 2011 ACM-SIAM Symposium on Discrete Algorithms, which took place in New Orleans the weekend of Jan. 9, Keren Censor-Hillel, a postdoc at MIT’s Computer Science and Artificial Intelligence Laboratory, and Hadas Shachnai of Technion – Israel Institute of Technology presented a new algorithm that handles bottlenecks much more effectively than its predecessors.
The algorithm is designed to work in so-called ad hoc networks, in which no one device acts as superintendent, overseeing the network as a whole. In a network of cheap wireless sensors, for instance, any given device could fail: its battery could die; its signal could be obstructed; it could even be carried off by a foraging animal. The network has to be able to adjust to any device’s disappearance, which means that no one device can have too much responsibility.
Without a superintendent, the network has no idea where its bottlenecks are. But that doesn’t matter to Censor-Hillel and Shachnai’s algorithm. “It never gets to identify the bottlenecks,” Censor-Hillel says. “It just copes with their existence.”
Consistent inconsistency
The researchers’ analysis of their algorithm makes a few simplifying assumptions that are standard in the field. One is that communication between networked devices takes place in rounds. Each round, a device can initiate communication with only one other device, but it can exchange an unlimited amount of information with that device and with any devices that contact it. During each exchange, it passes along all the information it’s received from any other devices. If the devices are volcano sensors, that information could be, say, each device’s most recent measurement of seismic activity in its area.
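As a rough sketch of this round model (the Device class and names below are invented for illustration, not taken from the paper), each device can be thought of as holding the set of measurements it has heard about, and an exchange simply merges the two sets:

```python
# A minimal sketch of the round model described above: each device keeps
# everything it has heard so far, and an exchange is just a merge of the
# two devices' knowledge. Names are illustrative, not from the paper.

class Device:
    def __init__(self, device_id, measurement):
        self.id = device_id
        # Everything this device knows so far: initially only its own reading.
        self.known = {device_id: measurement}

    def exchange_with(self, other):
        # Both endpoints forward all the information they have received so far.
        merged = {**self.known, **other.known}
        self.known = dict(merged)
        other.known = dict(merged)


# Two volcano sensors swap their latest seismic readings.
a, b = Device("a", 0.7), Device("b", 0.2)
a.exchange_with(b)
assert a.known == b.known == {"a": 0.7, "b": 0.2}
```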
It turns out that if you’re a sensor in a network with high connectivity — one in which any device can communicate directly with many of the others — simply selecting a neighboring device at random each round and sending it all the information you have makes it likely that every device’s information will permeate the whole network. But take two such highly connected networks and connect them to each other with only one link — a bottleneck — and the random-neighbor algorithm no longer works well. On either side of the bottleneck, it could take a long time for information to work its way around to the one device that communicates with the other side, and then a long time for that device to bother to send its information across the bottleneck.
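To make that failure mode concrete, here is a small simulation sketch (my own illustration, not code from the paper): two densely connected clusters joined by a single link, with every device exchanging everything it knows with one uniformly random neighbor each round. The graph builder and function names are invented for the example.

```python
# Plain random-neighbor gossip on a network with a single bottleneck:
# two cliques joined by one edge. This is an illustrative sketch, not
# the authors' code.

import random

def make_bottleneck_graph(cluster_size=50):
    """Two cliques, 0..k-1 and k..2k-1, joined only by the edge (k-1, k)."""
    k = cluster_size
    neighbors = {v: set() for v in range(2 * k)}
    for group in (range(k), range(k, 2 * k)):
        for u in group:
            neighbors[u] |= set(group) - {u}
    neighbors[k - 1].add(k)
    neighbors[k].add(k - 1)
    return neighbors

def random_gossip_rounds(neighbors, seed=0):
    """Rounds until every device has heard every device's measurement."""
    rng = random.Random(seed)
    known = {v: {v} for v in neighbors}          # each device starts knowing only itself
    rounds = 0
    while any(len(s) < len(neighbors) for s in known.values()):
        rounds += 1
        # Updates are applied sequentially within a round, a simplification
        # of truly simultaneous exchanges; the bottleneck effect is the same.
        for v in neighbors:
            u = rng.choice(sorted(neighbors[v]))  # contact one random neighbor
            merged = known[v] | known[u]          # both sides forward all they know
            known[v], known[u] = merged, merged
    return rounds

# Dissemination is typically dominated by the long wait for the two bridge
# devices to happen to pick each other.
print(random_gossip_rounds(make_bottleneck_graph()))
```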
Censor-Hillel and Shachnai’s algorithm works by alternating communication strategies from round to round. In the first round, you select a neighboring device at random and send it all the information you have — which, since it’s the first round, is limited to the measurement that you yourself have performed. That same round, however, other devices may contact you and send you their information. In the second round, you don’t just select a neighbor at random; you select a neighbor whose information you have not yet received. In the third round, you again select a neighbor at random. By the end of that round, since every device on the network forwards all the information it has, you’ve received not only the measurements performed by the devices you contacted and by the devices that contacted you, but also measurements performed by neighbors of your neighbors, and even neighbors of neighbors of neighbors. In the fourth round, you again select a device whose information you haven’t received; in the fifth, you select a device at random; and so on.
“The idea is that the randomized steps I take allow me to spread the information fast within my well-connected subset,” says Censor-Hillel. But in the alternate rounds, each device tracks down the devices it hasn’t heard from, ensuring that information will quickly reach all the devices, including those that communicate across the bottleneck.
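A corresponding sketch of the alternating idea, again an illustration of the strategy as the article describes it rather than the authors' published algorithm or its analysis, simply switches the selection rule on even-numbered rounds:

```python
# Alternating gossip sketch: odd rounds contact a uniformly random neighbor,
# even rounds contact a neighbor whose measurement has not yet been received.
# Illustrative only; the fallback to a random pick when nothing is unheard
# is an assumption of this sketch.

import random

def bottleneck_graph(k=50):
    """Two k-node cliques joined only by the edge (k-1, k)."""
    nbrs = {v: set() for v in range(2 * k)}
    for group in (range(k), range(k, 2 * k)):
        for u in group:
            nbrs[u] |= set(group) - {u}
    nbrs[k - 1].add(k)
    nbrs[k].add(k - 1)
    return nbrs

def alternating_gossip_rounds(nbrs, seed=0):
    rng = random.Random(seed)
    known = {v: {v} for v in nbrs}               # each device starts with only its own reading
    rounds = 0
    while any(len(s) < len(nbrs) for s in known.values()):
        rounds += 1
        for v in nbrs:
            if rounds % 2 == 1:
                # Odd rounds: a random neighbor, spreading data quickly
                # inside a well-connected cluster.
                u = rng.choice(sorted(nbrs[v]))
            else:
                # Even rounds: a neighbor we have not yet heard from, which
                # is what pulls data across the bottleneck.
                unheard = [u for u in sorted(nbrs[v]) if u not in known[v]]
                u = rng.choice(unheard) if unheard else rng.choice(sorted(nbrs[v]))
            merged = known[v] | known[u]          # both sides forward everything they know
            known[v], known[u] = merged, merged
    return rounds

# Typically finishes in noticeably fewer rounds than plain random gossip
# on the same bottlenecked graph.
print(alternating_gossip_rounds(bottleneck_graph()))
```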
According to Alessandro Panconesi, a professor of computer science at Sapienza University of Rome, the main obstacle to putting the algorithm into practice is its assumption that devices can exchange unlimited amounts of information in each round, since real sensors have only limited bandwidth.
Censor-Hillel agrees that “a major thing for future work would be to actually get practical bandwidth.” But for the time being, she’s collaborating with assistant professor of applied mathematics Jonathan Kelner and with CSAIL grad students Bernhard Haeupler and Petar Maymounkov to develop an algorithm that performs even better in the idealized case of unlimited bandwidth. “It’s a major improvement,” she says.