615 Projects
The 615 final project should involve an investigation that could form
the basis of a submission to a conference in the fields of mobile
systems, communications or peer-to-peer systems.
I have provided some possible topics below. It's certainly possible
to investigate other topics, but you need to clear them with me by Oct 25.
You may work alone or in pairs. Students working in pairs
are expected to undertake broader tasks and to perform more extensive
analyses than students working alone. No group may have more than one
Ph.D. student.
If you are running an experiment that requires network measurement
in the Internet at large, I am a member of the
PlanetLab consortium, which
provides access to a few dozen geographically distributed nodes around
the globe. Consequently, you can collect network statistics from more
than one location in the Internet.
Ad Hoc Networking Topics
- On the scalability of ad hoc routing protocols. We
came up with a new simulation technique last year in 615 and built a
new simulator, called sns, that is based on ns2 and is very fast and
scalable. Use sns to investigate how ad hoc routing protocols behave
in large-scale networks. Develop techniques, such as localization and
cluster decomposition, for increasing protocol scalability.
Demonstrate that your approach works better than existing protocols.
Not taken yet
- Hybrid routing for ad hoc networks.
Investigate the relative efficiency and adaptability of SHARP
vs. ZRP. Determine the circumstances under which one protocol
may be better than another. Doing this will involve implementing
ZRP and SHARP in sns (the scalable, much faster version of ns2).
Not taken yet
- Jamming. Quantify how bad targeted jamming attacks
can be on a wireless network, and design a new MAC-layer protocol
that can limit the impact of MAC-layer jamming. Note that radio
jamming techniques, such as blindly transmitting an interference
signal, will have a linear blowup - X seconds of transmission
during an attack will keep the targeted nodes off the air for X seconds.
In contrast, an attack targeted at the 802.11b MAC layer, which
performs exponential backoff, can keep nodes off the air for 2^X seconds
with X seconds of carefully orchestrated transmissions. The goal of this
project is to reduce this blowup factor.
Matthew Wachs
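To see where the exponential blowup comes from, here is a small illustrative model in Python. The slot time and contention-window bounds are the standard 802.11b values; the attack model, in which each carefully timed attacker transmission forces exactly one collision on the victim, is a simplifying assumption for illustration:

```python
# Illustrative model of MAC-layer jamming: each attacker transmission
# forces one collision on the victim, doubling its contention window.
SLOT_TIME = 20e-6          # 802.11b slot time, in seconds
CW_MIN, CW_MAX = 31, 1023  # 802.11b contention-window bounds

def expected_silence(forced_collisions):
    """Expected time (seconds) the victim spends backing off after the
    attacker forces this many consecutive collisions. The victim waits
    a uniform random number of slots in [0, CW] per attempt, so the
    mean per-attempt backoff is (CW / 2) slots."""
    total, cw = 0.0, CW_MIN
    for _ in range(forced_collisions):
        total += (cw / 2) * SLOT_TIME
        cw = min(2 * cw + 1, CW_MAX)   # exponential backoff, capped
    return total
```

Because the window doubles on every forced collision, the victim's cumulative backoff grows geometrically in the number of attacker transmissions until CW_MAX is hit, which is exactly the asymmetry a defensive MAC protocol would need to remove.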
- User management for wireless access. Managing users on a
wireless network is quite difficult. Typically, membership in the
network is controlled by a global secret. Anyone with this secret,
e.g. "rover", can connect to the network. Further, for both technical
and non-technical reasons, these secret keys need to be short. So
short, in fact, that key recovery attacks are quite feasible. People
have demonstrated that the $4 billion worth of deployed 802.11b
equipment out there suffers from numerous security vulnerabilities
due to these recoverable keys.
To perform user management, and to thwart attacks against keys, we
need to do two things: (1) frequently change the low-level keys before
attackers have the opportunity to capture enough packets and reverse
the key, and (2) cleave out unwanted users from the network. Use a
two-tiered approach. At the lowest level, frequently change keys using
Lamport passwords. At a higher level, use a key broadcast technique to
cleave out unwanted users. Adapt Fiat and Naor's work, which targeted
always-on set-top boxes, to the wireless network domain, where users
may be offline for significant periods of time.
Alex L.
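As a sketch of the low-level key rotation, this is how a Lamport hash chain works. SHA-256 and the helper names are my choices for illustration, not part of the project specification:

```python
import hashlib

def make_chain(seed: bytes, n: int):
    """Build a Lamport hash chain: k_i = H(k_{i-1}). The network is
    initialized with the chain tip k_n; keys are then released in
    reverse order, so each newly released key can be verified by
    hashing it once and comparing against the previous key."""
    chain = [seed]
    for _ in range(n):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify(new_key: bytes, current_key: bytes) -> bool:
    # A released key is valid iff one hash application yields the
    # key currently in use; the hash's one-wayness stops attackers
    # from predicting future keys from captured past keys.
    return hashlib.sha256(new_key).digest() == current_key
```

The access point walks the chain backwards, so capturing today's key tells an attacker nothing about tomorrow's.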
- Improving the Energy Efficiency of Commodity OSes. Commodity
operating systems are not designed to conserve energy. Consequently,
they are unsuitable for mobile devices. Take Linux, and make the
following changes to improve its energy efficiency (or else
evaluate potential benefits and describe required new hardware
to implement them):
- Replace the boot sequence with kernel checkpoint and recovery
- Replace periodic tasks, such as clock interrupts, with lazy
evaluation. Skip the interrupts and perform the soft clock update
from the realtime clock.
- Snapshot applications after each file close, record checksum of
the input files, reuse snapshot if the input files have not
changed.
- Perform aggressive device energy management. Turn off unused components,
spin down drives etc.
- Record and compress TLB trace history, repopulate TLB without
incurring misses. Build model to increase mean time between disk accesses,
which increases energy efficiency when coupled with aggressive spindown.
- Perform whole file recovery. Do not engage the filesystem for files
that can easily be reconstructed, e.g. object files, binaries, etc. Use RAM
file system to store the files.
Biswanath & Hitesh
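The snapshot-reuse item in the list above can be sketched at user level as follows. The function names, the pickle-based snapshot format, and the cache layout are illustrative assumptions, not the kernel mechanism the project calls for:

```python
import hashlib
import os
import pickle

def input_fingerprint(paths):
    """Hash the contents of all input files. If the fingerprint is
    unchanged since the last run, a saved snapshot of the result can
    be reused instead of re-running the computation, saving CPU time
    and hence energy."""
    h = hashlib.sha256()
    for p in sorted(paths):
        with open(p, "rb") as f:
            h.update(f.read())
    return h.hexdigest()

def run_with_snapshot(inputs, compute, cache_dir):
    """Reuse a cached result keyed by the input checksum; recompute
    and snapshot only when some input file has changed."""
    os.makedirs(cache_dir, exist_ok=True)
    snap = os.path.join(cache_dir, input_fingerprint(inputs))
    if os.path.exists(snap):              # inputs unchanged: reuse
        with open(snap, "rb") as f:
            return pickle.load(f)
    result = compute(inputs)              # inputs changed: recompute
    with open(snap, "wb") as f:
        pickle.dump(result, f)
    return result
```

The kernel-level version would snapshot whole process state after each file close rather than a pickled return value, but the checksum-keyed reuse logic is the same.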
- GPS-less positioning. Allow a large number of nodes within
a network to discover their position based on a very small number of
landmark nodes equipped with GPS receivers. Set up a system of
geometric constraints based on which nodes are within transmission
range of each other. Solve this system (using bezier curve intersection,
or a bitmap representation of the space) independently at each node.
This work should vastly improve positioning certainty and total system
cost (which is dependent on the number of nodes that need to be equipped
with GPS receivers), compared to Estrin's work.
Saikat
Peer to Peer Systems
- Anonymous Communication. Implement a messaging and filesharing
application on top of Herbivore, a provably secure, scalable and
efficient protocol for anonymous communication.
Famer
- Beehive applications. Build a Beehive-based web cache.
Beehive is an optimal caching scheme that permits P2P overlays to perform
lookups in O(1) time, regardless of the size of the network. The Beehive paper that describes the operation of the system is on the class syllabus.
Such a cache will use the proactive caching strategy described in the
Beehive paper to implement a peer-to-peer cache of web pages. Such a
system will be resilient both to denial of service attacks and the
"slashdot effect." Deploy your Beehive web cache on PlanetLab, and
measure its performance against a traditional cache. If successful,
this project could eliminate the slashdot effect from the Internet:
Slashdot would link to websites through your cache, deployed on
PlanetLab nodes.
Fawaz
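For intuition, the replication rule behind Beehive's O(1) lookups can be sketched in a few lines. Identifiers are shown here as digit strings; choosing the right replication level per object from its popularity is the actual substance of the Beehive paper and is omitted:

```python
def replica_nodes(obj_id, node_ids, level):
    """Beehive-style proactive replication, sketched: an object at
    level i is replicated on every node whose identifier shares the
    object's first i digits, so a prefix-routing lookup reaches a
    replica within i hops. Level 0 puts the object on every node."""
    prefix = obj_id[:level]
    return [n for n in node_ids if n.startswith(prefix)]
```

Popular objects get low levels (many replicas, short lookups); unpopular ones get high levels (few replicas), which is what makes the average lookup cost constant.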
- P2P Indexing and Search Engine. Search engines require
large amounts of bandwidth to index the web, large amounts of storage
(both disk and memory) to store the built index, and large amounts of
processing power to run the queries. The minimum hardware cost to
launch a search engine is, consequently, quite high. However, in
aggregate, the hosts that make up a P2P system provide large amounts
of these resources.
Investigate a distributed web crawler that uses a structured P2P
system to crawl the web. Construct a distributed index construction
and partitioning scheme such that only a portion of the index is kept
at each node. Route queries to the right set of clients to cover the
full index. Build and measure the performance of such a system.
Aseem
Bernard
Selcuk
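The index-partitioning and query-routing steps above can be sketched as follows. This is a hypothetical simplification: terms are mapped to nodes by hash-mod over a fixed node list, standing in for the structured overlay's consistent-hash lookup, and the "index" is a plain inverted index over whitespace-tokenized text:

```python
import hashlib
from collections import defaultdict

def node_for(term, nodes):
    """Map a term to the node responsible for its posting list
    (hash-mod here stands in for the overlay's key lookup)."""
    h = int(hashlib.sha1(term.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

def partition_index(docs, nodes):
    """Build per-node shards of an inverted index: each node stores
    only the posting lists for the terms that hash to it."""
    shards = {n: defaultdict(set) for n in nodes}
    for doc_id, text in docs.items():
        for term in set(text.lower().split()):
            shards[node_for(term, nodes)][term].add(doc_id)
    return shards

def query(terms, shards, nodes):
    # Route each term to its shard, then intersect the posting lists.
    postings = [shards[node_for(t, nodes)].get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()
```

A multi-term query touches only the nodes owning those terms, so query load, like the index itself, spreads across the system.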
- Proximity-aware P2P systems. Devise a scheme by which
nodes with higher capacity in a P2P system have more objects allocated
to them. Assign space around a ring in proportion to the capacity of
a node. Reassign node ids in the ring to reduce overlap and fragmentation
(incurring costs to transfer objects). Compare against randomly placing
nodes, and against using virtual nodes (which incur a high cost at the
routing layer because each large-capacity node now needs to act like
many small virtual nodes).
Not taken yet
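The capacity-proportional assignment above can be sketched directly. A minimal illustration, assuming a static membership and integer arc arithmetic; handling joins, leaves, and the id-reassignment costs the project asks about is the real work:

```python
def assign_arcs(capacities, ring_size=2**32):
    """Divide the identifier ring among nodes in proportion to
    capacity: a node with twice the capacity owns twice the arc and,
    under uniform object hashing, stores twice the objects."""
    total = sum(capacities.values())
    arcs, start = {}, 0
    for node, cap in sorted(capacities.items()):
        span = ring_size * cap // total
        arcs[node] = (start, start + span)
        start += span
    # Give any integer-rounding remainder to the last node.
    last = sorted(capacities)[-1]
    arcs[last] = (arcs[last][0], ring_size)
    return arcs

def owner(key_hash, arcs):
    """Find the node whose arc contains a hashed key."""
    for node, (lo, hi) in arcs.items():
        if lo <= key_hash < hi:
            return node
```

Unlike virtual nodes, each physical node appears once in the routing layer regardless of its capacity, which is the saving the project aims to measure.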
- Distributed single sign-on. A single sign-on scheme is
attractive because it provides authenticated identities that can be
used across the Internet. However, users are wary of trusting a single
entity, like Microsoft, to perform authentication. Devise a scheme by
which users can be authenticated by a distributed authentication service,
whose servers operate in separate administration domains. Enable
systems on the network to specify policies such as "I will consider
a user authenticated if he has been vetted by either yahoo.com or
passport.com, and he has been checked by one of these five biometric
authentication services." Make this scalable and efficient by pushing as
much of the work to the clients as possible.
William
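The example policy above has a natural machine-readable form: a conjunction of threshold clauses, each satisfied when enough of the listed services vouch for the user. A minimal sketch; the clause encoding and service names are illustrative assumptions:

```python
def satisfied(policy, vouchers):
    """Evaluate an authentication policy against the set of services
    that have vouched for the user. A policy is a list of clauses
    (k, services), meaning 'at least k of these services must vouch';
    every clause must hold (an AND of threshold-ORs)."""
    return all(len(set(services) & vouchers) >= k
               for k, services in policy)

# "Vetted by yahoo.com or passport.com, AND checked by one of
# five biometric services" (service names are hypothetical).
policy = [
    (1, ["yahoo.com", "passport.com"]),
    (1, ["bio1", "bio2", "bio3", "bio4", "bio5"]),
]
```

Because evaluation needs only the set of vouching services, a client can gather signed vouchers itself and present them in one batch, pushing work away from the authentication servers as the project suggests.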