Approximate Storage

December 30, 2013

I recently had the privilege of presenting our latest paper on approximate computing at MICRO 2013 in Davis. This newest bit of work builds on the community’s recent efforts to trade accuracy for efficiency in CPUs, GPUs, and other accelerators, and expands the scope of approximation beyond computation itself. We wanted to demonstrate that the approximation paradigm (if you’ll forgive the business-school word) is relevant in other components too: in particular, in storage systems.

Today’s memory and mass storage devices hold a lot of error-tolerant data. If you look at what’s filling up your smartphone’s flash memory, for example, it’s likely dominated by photos and music—stored in media formats that already trade off quality for size. On the opposite end of the computing spectrum, datacenter-scale machine learning systems aggregate huge amounts of fast memory but are resilient to occasional errors in the stored data.

Our project proposes approximate storage: an abstraction and a set of techniques that exploit these kinds of error-tolerant data—in both main memory and persistent, disk-like storage—to make memories better. In particular, we wanted to exploit the unique properties of phase-change memory (PCM), an upcoming replacement for disk, flash, and potentially DRAM, to address some of its drawbacks. While PCM promises to solve DRAM’s scaling woes and vastly outpace flash SSDs, it has two significant pitfalls: dense multi-level PCM is slow and power-hungry compared to DRAM, and the memory has a finite lifetime—it eventually wears out. Approximation can help address both problems. By allowing sloppier writes, we can make PCM faster and denser. And by recycling failed memory blocks that otherwise would need to be thrown away, we can make devices last longer even as parts of them begin to fail.
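To make the abstraction a little more concrete, here is a minimal sketch of what such an interface might look like from software’s point of view. It is not the interface from our paper, and every name in it is hypothetical; the point is just that the program declares which data can tolerate errors, so the storage layer can use sloppier, faster writes and recycled worn-out blocks for that data while keeping everything else precise.

```c
/* A minimal sketch, NOT the paper's actual interface: all names here are
 * hypothetical. Software declares which data can tolerate errors, so the
 * storage layer can steer precise data to healthy blocks and approximate
 * data to faster, sloppier writes or to recycled, partially worn-out blocks. */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef enum { PRECISE, APPROX } precision_t;

typedef struct {
    void        *data;      /* simulated storage block */
    size_t       size;
    precision_t  precision; /* APPROX data may come back with a few flipped bits */
} block_t;

/* Allocate a block. In a real device, an APPROX request could be satisfied
 * by a block whose error-correction budget is exhausted and that would
 * otherwise be retired (failed-block recycling). */
static block_t *block_alloc(size_t size, precision_t precision) {
    block_t *b = malloc(sizeof(*b));
    if (!b) return NULL;
    b->data = calloc(1, size);
    if (!b->data) { free(b); return NULL; }
    b->size = size;
    b->precision = precision;
    return b;
}

/* Write to a block. For APPROX blocks, a multi-level-cell memory could cut
 * its iterative program-and-verify loop short, accepting a small chance of
 * landing on a neighboring analog level in exchange for a faster, cheaper
 * write. In this sketch we simply copy the bytes. */
static void block_write(block_t *b, const void *src, size_t len) {
    memcpy(b->data, src, len < b->size ? len : b->size);
}

int main(void) {
    /* File metadata must be exact; decoded image samples can tolerate error. */
    block_t *metadata = block_alloc(256, PRECISE);
    block_t *pixels   = block_alloc(64 * 1024, APPROX);
    if (!metadata || !pixels) return 1;

    const char *name = "IMG_0042.jpg";
    block_write(metadata, name, strlen(name) + 1);

    unsigned char samples[64 * 1024] = {0};
    block_write(pixels, samples, sizeof(samples));

    printf("metadata stored precisely; %zu bytes of pixels stored approximately\n",
           pixels->size);

    free(metadata->data); free(metadata);
    free(pixels->data);   free(pixels);
    return 0;
}
```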

We simulated both of these strategies using a variety of approximate programs and error-resilient data sets. On average, approximate writes were 1.7x faster than precise writes, and failed-block recycling extended the useful device lifetime by 27%.

An Experiment and a Terrible Video

You can read our paper if you’re interested in the technical details. But if you’d prefer to hear me try to explain them, I’m trying something new this time.

Like all grad students, I spend a lot of time preparing conference talks. These talks are our only chance to demonstrate our honest, hopelessly nerdy excitement for the work we conduct otherwise mostly in solitude. As sentimental as it sounds, the conference setting really does make research feel vital—the right “pitchman” can make you pay attention to a great paper you’d otherwise pass over.

But in our community, conference talks are a one-time affair. Recording is not common practice, so the only option is to physically fly to conferences. Whenever I miss a conference, I wish for video. (Almost as good as actually attending, and much better coffee!)

In an endeavor to make this a reality, here’s my first conference talk video. I recorded it while the real thing was still fresh in my mind to simulate the nerve-obliterating conference experience. The audio quality is not great and my delivery is pretty terrible, but it’s a start. Let me know what you think—I’d like to make a routine out of this and I’m interested in feedback about how to make these videos as useful as possible.