BU NSF06 kannan


Published on October 31, 2007

Author: Heather

Source: authorstream.com

Making dense sensor networks smarter using randomized in-network processing
Kannan Ramchandran, UC Berkeley (joint work with Alex Dimakis and Dragan Petrovic)

Motivating vision
- A dense collection of tiny devices that derive strength in numbers
- Philosophy: local actions → global properties
- Each node acts randomly and independently, based on local knowledge
- The desired global behavior arises as a collective property

Evolutionary success stories
- 10^9 neurons per "node", around for roughly 10^5 years: 10-15% of terrestrial animal biomass
- 10^5 neurons per "node", around for roughly 10^8 years: also 10-15% of terrestrial animal biomass
(Bert Hölldobler and Edward O. Wilson, Journey to the Ants, Harvard University Press, 1994.)
Goal of this talk
Study the scaling power of a "distributed and randomized" mindset in two application scenarios:
1. Reliable communication with unreliable "untuned" radios: randomized network coding
2. Large-scale networked storage of remotely generated data: decentralized erasure codes

Motivation: the drive for lower power, cheaper, smaller
- Power: 10 mW on the market; 3-4 mW in prototypes; energy scavenging can give 100 mW/cm²
- Cost: $50 each on the market; roughly $5 each for prototypes at high volume; need about $0.10
- Size: small enough to "mix with paint"
- Problem: digital subsystems are still riding Moore's law, but there is no such trend for radios (analog circuitry plus an off-chip crystal)

Narrowband radios
Narrowband radios are simple and are used by most sensor nodes today. How do we get the carrier frequency f_carrier?
- Crystal oscillator: precise but expensive
- MEMS resonator: less precise and less expensive
- On-chip LC resonator: cheap and low-power, but imprecise

Untuned radios
Admitting all possible frequencies at the receiver would result in low SNR, while a narrow filter at the receiver results in channelization, and there is no guarantee that two nodes can select the same channel. Can a reliable communication network be made out of such unreliable radios? Yes, if there is enough density.

"Random" multi-hop communication
Packets travel from the source through H blocks (stages) to the destination. Transmitters in block b choose a channel randomly and independently; receivers in block b+1 likewise choose a channel randomly and independently.

Channel selection
Transmitters send on a random channel and receivers listen on a random channel. If there are N channels, throughput is maximized with N transmitters choosing channels uniformly, and the probability that a given channel contains exactly one transmitter approaches 1/e.

System parameters
- N nodes per block; H is the number of stages (hops)
- L receivers and 1 transmitter per node
- Nodes may hear more than one transmitter; they may forward one of the packets, or randomly mix them using linear network coding

Achievability of maximum flow
From T. Ho, M. Médard, R. Koetter, J. Shi, M. Effros, and D. Karger (2003): a max-flow-achieving network code exists in GF(q) for q greater than the number of receivers, and random network coding achieves the max-flow w.h.p. as q grows. (Slide courtesy of Phil Chou, MSR.) So what is the max-flow here?

Random graph representation: what is the max-flow?
Each vertex connects to L vertices in the previous column (the L untuned radios), and each vertex is deleted with probability 1 - 1/e (a collision occurs with probability 1 - 1/e).

Finding the max-flow
The probability that r end-to-end paths exist when vertices are deleted with probability p2 can be related to the probability that a single path exists when vertices are deleted with probability p1, for 0 < p2 < p1 < 1.

End-to-end path
The number of nodes in each column with connectivity to column 1 forms a Markov chain. The probability that this chain dips below a particular threshold within H steps can be shown to be exponentially small in N.

Result 1 (Petrović, Rabaey & Ramchandran, 2005)
For any constant b < 1/e, there exists a constant L such that the max-flow of the graph exceeds (b - log(H)/N)·N w.h.p. as N gets large. Throughput therefore approaches (1/e)·N for constant L, as long as H is subexponential in N.

Result 2
Routing gives only constant throughput (it does not grow with N) for H linear in N. Punchline: network coding is necessary, and is "infinitely" better than store-and-forward in this case!
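The layered connectivity process behind Result 1 can be checked numerically. The sketch below is my own illustration, not from the talk, with made-up parameters N = 500, L = 8, H = 30: each vertex survives channel contention with probability (1 - 1/N)^(N-1) ≈ 1/e (exactly one transmitter on its channel) and listens to L random vertices of the previous column. The count of connected vertices per column then settles near a constant fraction of N and stays there across many hops, approaching N/e as L grows.

```python
import random

def connected_per_stage(N=500, L=8, H=30, seed=1):
    """Count, per column, the nodes that still have a path back to column 1.

    Toy model of the talk's random-graph representation (my own sketch):
    a vertex survives channel contention with probability
    (1 - 1/N)**(N - 1) -> 1/e (exactly one transmitter on its channel),
    and listens to L uniformly random vertices of the previous column
    (sampled with replacement, a minor simplification).
    """
    random.seed(seed)
    survive = (1 - 1 / N) ** (N - 1)  # -> 1/e for large N
    history = [N]  # column 1: all N source-side nodes assumed connected
    connected = N
    for _ in range(H - 1):
        prev = connected
        connected = 0
        for _ in range(N):
            if random.random() > survive:
                continue  # vertex deleted by a collision
            # By symmetry, treat the `prev` connected nodes as indices 0..prev-1.
            if any(random.randrange(N) < prev for _ in range(L)):
                connected += 1
        history.append(connected)
    return history

hist = connected_per_stage()
print(hist[0], hist[-1])  # the count settles near a constant fraction of N
```

Note that the steady level depends on L: for small L the surviving fraction sits below 1/e, which is why Result 1 needs a large enough constant L.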
Simulation
A simulation environment was created to verify and extend the theoretical results (in the figures: H = 3, L = 10, N = 100). Simulations allow us to analyze real-world aspects such as interference modeling, the impact of positioning errors, and the impact of different medium-access-control and routing schemes on network performance. We can build the end-to-end graph and evaluate the max-flow for each simulated packet wave. The results show that the max-flow of the network is mainly determined by the first few stages and is thereafter maintained (L = 10, N = 100). (Simulation slides by Luca De Nardis, INFO-COM Department, University of Rome La Sapienza, lucadn@eecs.berkeley.edu.)

Goal of this talk
Illustrate the scaling power of a distributed and randomized mindset in two application scenarios:
1. Reliable communication with unreliable "untuned" radios: randomized network coding
2. Reliable large-scale networked storage of remotely generated data using nodes with limited memory: decentralized erasure codes

Problem description and motivation
The traditional approach is to pull data: find where the data is, then find routes to it. This introduces latency and unreliability.

Ubiquitous access to data
Instead, pre-route: make the data ubiquitous, so the network is ready for any query, at any location. Data is then available everywhere, which gives robustness and low latency.

Problem statement
- n storage nodes and k data nodes, with k = α·n (α < 1, fixed)
- We want to reconstruct the original data packets from any k storage nodes
- Each node can store only one packet
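To see why coding is needed at all here, consider a toy comparison (my own illustration, not from the talk, with made-up sizes k = 200, n = 1000): if storage nodes keep raw copies of randomly chosen packets instead of coded combinations, querying k of the n nodes recovers only about a 1 - 1/e fraction of the distinct packets, rather than all k.

```python
import random

def replication_recovery(k=200, n=1000, trials=50, seed=7):
    """Fraction of distinct data packets recovered by querying k random
    storage nodes when each storage node holds one raw (uncoded) packet."""
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        # Each storage node stores a copy of one uniformly random data packet.
        store = [random.randrange(k) for _ in range(n)]
        queried = random.sample(range(n), k)
        total += len({store[j] for j in queried}) / k
    return total / trials

frac = replication_recovery()
print(f"{frac:.3f}")  # roughly 1 - 1/e, i.e. about 0.63, far short of 1.0
```

This is the coupon-collector effect: duplicates among the queried copies waste storage, which is exactly the gap erasure coding closes.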
Centralized solution
- Gather the data at a central node
- The central node creates an efficient (MDS) erasure code (LT codes, Reed-Solomon)
- The central node routes encoded packets to all storage nodes; each storage node stores one erasure packet
- Any k storage nodes then have all the data
- Not scalable: how do you build a code when your data is not in one place?

Decentralized erasure codes (example)
- Each storage node stores a linear combination over a finite field; for example, with data packets A, B, C, D, four storage nodes might hold A, D, B+C, and A+C
- k random storage nodes are queried; we want these k equations to have a unique solution
- Result: this is feasible with random, independent pre-routing
- Communication cost: the number of pre-routed packets (the data-node degree)
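The example above can be made concrete with a small self-contained sketch (my own toy code, with made-up sizes k = 20, n = 60 and a prime field GF(257) in place of the talk's GF(2^u)): each data node pre-routes its packet, with a random nonzero coefficient, to a logarithmic-order number of random storage nodes, and we then check how often a random set of k storage nodes yields a full-rank system of equations.

```python
import random

Q = 257  # prime field size; the talk uses GF(2^u), a prime field is simpler to sketch

def rank_mod_q(rows, q=Q):
    """Rank of a matrix over GF(q) via Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col] % q), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], q - 2, q)  # inverse via Fermat's little theorem
        rows[rank] = [x * inv % q for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(a - f * b) % q for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def full_rank_fraction(k=20, n=60, degree=25, trials=30, seed=5):
    """Each data node pre-routes its packet, with a random nonzero coefficient,
    to `degree` random storage nodes (here a generous multiple of log n, since
    the toy sizes are tiny); each storage node keeps one row of G.  Then query
    k random storage nodes and check whether their k x k submatrix G' is full
    rank, i.e. whether the k original packets can be reconstructed."""
    random.seed(seed)
    ok = 0
    for _ in range(trials):
        G = [[0] * k for _ in range(n)]  # G[j][i]: coefficient of packet i at node j
        for i in range(k):
            for j in random.sample(range(n), degree):
                G[j][i] = random.randrange(1, Q)
        queried = random.sample(range(n), k)
        ok += rank_mod_q([G[j] for j in queried]) == k
    return ok / trials

fraction = full_rank_fraction()
print(fraction)  # close to 1: any k storage nodes suffice w.h.p.
```

In a real decoder the same Gaussian elimination over GF(q) that checks the rank would also recover the packets X from the stored values Y = X·G'.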
Random target routing: find an (almost) random node
- The node picks a random location (the "target") and routes greedily towards it
- The probability that a given node receives the packet is proportional to the area of its Voronoi cell
- Random target routing works on random geometric graphs

Random geometric graphs
A realistic sensor-network model (Gupta & Kumar): the random geometric graph G(n, r) places n random points and connects two points if their distance is less than r. Performance depends on the graph and on the transition probabilities.

Random target routing lemma
If r is large enough (the precise condition was a formula on the slide and is not preserved in this transcript), then with high probability, random routing on G(n, r) transports the packet to the node closest to the random target, the number of hops is bounded (formula likewise not preserved), and each node receives with probability proportional to the area of its Voronoi cell.

Decentralized erasure codes (construction)
- Data nodes pre-route their packets to randomly selected storage nodes
- Storage nodes select random coefficients f_i from a field F_q and store the resulting linear combination together with the coefficients
- Example with k = 3 data nodes X1, X2, X3 and n = 4 storage nodes: the stored values are f1·X1 + f2·X2, f3·X2, f4·X1 + f5·X3, and f6·X3
- The encoded vector (what is stored) is Y = X·G, where each row of G is generated independently; that independence is what makes the code decentralized
- We want G as sparse as possible
- If, say, only storage nodes 1-3 are queried, reconstruction succeeds exactly when the corresponding submatrix G' is full rank

Main results (Dimakis, Prabhakaran & Ramchandran, 2005)
- Communication cost (data-node degree): a degree of O(log n) is sufficient (w.h.p.), and Ω(log n) is necessary for independent data nodes
- G' is full rank with probability at least 1 - k/q, where k is the number of data nodes and q = 2^u is the size of the finite field; the error probability decreases exponentially in the number of codeword bits u (the storage overhead)

Proof ideas
We want G' invertible, i.e. det(G') ≠ 0. Note that det(G') is a multivariate polynomial in the random coefficients f1, f2, ..., fL. There are two cases:
- Case I: det(G') could be identically zero (for any choice of the f_i), e.g. if one column is all zeros (one storage node received nothing)
- Case II: f1, f2, ..., fL could happen to be a root of the determinant polynomial

Proof ideas (Case I: the hard case)
By Edmonds' theorem, the determinant is not identically zero iff the associated random bipartite graph has a perfect matching. We show that the random bipartite graphs we introduced have a perfect matching (w.h.p.). The proof modifies a technique introduced by Erdős for a similar problem (every bipartite graph with 2k nodes and k·log(k) randomly assigned edges has a perfect matching w.h.p.). This result may be of independent interest.

Proof ideas (Case II: the easy case)
Maybe f1, f2, ..., fL is a root of the determinant polynomial. Intuitively, a random assignment is very unlikely to hit a root. Formally, by the Schwartz-Zippel theorem: if Q(f1, f2, ..., fL) is a multivariate polynomial of degree d that is not identically zero, and the f_i are chosen i.i.d. from a finite field F_q, then P[Q(f1, f2, ..., fL) = 0] ≤ d/q. It is easy to see that d = k for the determinant polynomial.

Lower cost by allowing for slack?
- Exact data collection: recover all k packets by querying k storage nodes; the communication cost is O(log n)
- Approximate data collection: what if we only want to recover (1-δ)·k data packets by querying any (1+ε)·k storage nodes (for constants δ, ε > 0)?
Result (Dimakis, Prabhakaran & Ramchandran, 2006): this can be done with constant-degree routing and belief-propagation decoding for lattice graphs (a distributed fountain code, ICASSP 2006).

Conclusion
- Explored the feasibility of pushing the sensor-network envelope towards tiny, cheap, low-power, and unreliable nodes, with examples where unreliability is overcome with density
- Distributed storage: information is uniformly diffused through the network (Dimakis, Prabhakaran, Ramchandran, "Decentralized Erasure Codes for Distributed Networked Storage," IEEE Transactions on Information Theory, special issue on Networking and Information Theory)
- Untuned radios still give high communication throughput, given density and simple random linear mixing (Petrovic, Rabaey, Ramchandran, "Untuned radios in wireless sensor networks with network coding," IEEE Transactions on Information Theory, special issue on Networking and Information Theory)
- Several open questions remain: general networks and traffic patterns, dynamics, hierarchical extensions, correlated data, complexity, the value of some coordination, cross-layer optimization, and validation/deployment challenges
