We held WAX, the workshop on approximate computing, at ASPLOS last week. I love organizing WAX—it’s a great excuse for the approximation community to talk about the broader themes that extend beyond any single person’s research project du jour.
Here are some notes on those themes. You can also check out the archived program for links to papers and slides.
Disgruntled Introductions
To introduce ourselves, we all said something we like about approximate computing and something we don’t like. Predictably, this invited a healthy dose of griping. Two gripey themes emerged:
- There’s some sadness that approximation hasn’t “hit it big” yet, commercially speaking. We’re a half decade or so into the approximate-computing craze, so many feel we should have seen shipping hardware by now.
- Our terminology is confusing. What does quality mean versus accuracy versus quality of service? Some attendees even complained that the name approximate computing itself is more off-putting than alternatives like inexact or good-enough computing.
Cross-Stack Keynotes
We had two awesome keynote speakers, both of whom brought broad, interdisciplinary views on approximation.
Matthai Philipose from MSR has a goal of continuous mobile vision: always-on CV on a wearable device with all-day battery life and reasonable cloud costs. His data suggests that approximation is critical—not just a luxury—for this setting: current vision techniques can’t fit in the necessary energy and dollar budgets. It’s not even close. So he’s on a campaign to introduce approximation everywhere, from the camera sensor hardware to the DNN models and algorithms.
Naveen Verma from Princeton is a hardware researcher but, unlike some architects, believes approximate computing should come from the top down, from algorithms. He showed off data-driven hardware resilience, where you train a machine learning model to counteract the effects of deterministic hardware approximation. Under the right conditions, this cross-stack approach can lead to extremely good tolerance—much more than algorithm-agnostic approximation.
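To give a flavor of the general idea, here is a minimal sketch of data-driven resilience in miniature. It is my own toy reconstruction, not Verma’s actual system: the bit-truncating “approximate multiplier” and the linear least-squares correction model are assumptions I chose for illustration.

```python
import numpy as np

# A toy "approximate multiplier": deterministically truncates the low-order
# bits of each operand before multiplying. This error model is an assumption
# chosen for illustration, not real approximate hardware.
def approx_mul(a, b, drop_bits=4):
    mask = ~((1 << drop_bits) - 1)
    return (a & mask) * (b & mask)

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=10_000)
b = rng.integers(0, 256, size=10_000)

exact = a * b
approx = approx_mul(a, b)

# The data-driven part: fit a simple correction model that maps the operands
# and the approximate output to the exact output, so the deterministic,
# systematic component of the error gets learned away.
X = np.column_stack([a, b, approx, np.ones_like(a)]).astype(float)
coef, *_ = np.linalg.lstsq(X, exact.astype(float), rcond=None)
corrected = X @ coef

print("mean |error| before correction:", np.abs(exact - approx).mean())
print("mean |error| after correction: ", np.abs(exact - corrected).mean())
```

Even this crude linear model removes most of the systematic bias introduced by the truncation, which is the flavor of result that makes the cross-stack, algorithm-aware approach so much more forgiving than algorithm-agnostic approximation.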
Best Practices
The discussion at the end of the day coalesced around standards of rigor in approximate-computing research. There was a broad consensus that evaluation methodologies have not improved enough since those heady days of the first few approximation papers. We hatched the idea of putting together a best practices document for approximation research, covering:
- Standard benchmarks with standard quality metrics and standard thresholds. When people are free to define their own quality metrics, there’s no way to compare two papers and no way to trust that an approximation is actually useful (the toy example after this list shows how much the metric choice matters). The de facto 10% quality loss standard is my least favorite legacy of the EnerJ paper.
- A map of the available approximation techniques. If you want to apply approximation to a bottleneck in your favorite application, where should you start? This kind of guide is common for traditional performance optimization, so we should have one too.
- Agreement on what kinds of quality guarantees are worth striving for. Where do statistical guarantees make sense, and where do we need more traditional deterministic bounds?
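To make the first point concrete, here is a toy illustration of why metric choice matters. The data is made up, and the two metrics (mean relative error and range-normalized RMSE) are arbitrary stand-ins, not anything the workshop standardized on; the point is only that the same approximate output can land on opposite sides of a 10% bar depending on which metric a paper happens to pick.

```python
import numpy as np

# Made-up "benchmark" output: an exact result and a noisy approximation.
rng = np.random.default_rng(1)
exact = rng.uniform(1.0, 10.0, size=1000)
approx = exact + rng.normal(0.0, 0.6, size=1000)  # approximation error

def mean_relative_error(exact, approx):
    return float(np.mean(np.abs(exact - approx) / np.abs(exact)))

def normalized_rmse(exact, approx):
    return float(np.sqrt(np.mean((exact - approx) ** 2)) /
                 (exact.max() - exact.min()))

THRESHOLD = 0.10  # the de facto "10% quality loss" bar

for name, metric in [("mean relative error", mean_relative_error),
                     ("normalized RMSE", normalized_rmse)]:
    loss = metric(exact, approx)
    verdict = "acceptable" if loss <= THRESHOLD else "unacceptable"
    print(f"{name}: {loss:.3f} -> {verdict}")
```

With these particular numbers, the same output fails the 10% bar under one metric and passes comfortably under the other, which is exactly why self-chosen metrics make papers incomparable.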
I’m excited about this idea for a community-sanctioned set of standards. But it’s going to be difficult: work like this doesn’t fit with normal incentives for academics.
A Better Workshop Next Year
I have plenty to learn about organizing workshops. Here are some things we need to fix:
- One-minute lightning talks are a staple at WAX, but they need work. They’re supposed to be an effortless, fun way to provoke discussion, but lately people have put too much work into them, and that higher bar means fewer people sign up. At the same time, I heard that one minute isn’t enough to communicate a whole idea. We need new ideas that keep the lightning round’s lighthearted spirit while making it more useful.
- Several people told me that the free-form, small-group lunch discussions were their favorite part of WAX. This was doubly true for new people and outsiders, who got to ask questions that wouldn’t work in front of a whole-workshop audience. We need to create more of this kind of discussion, ideally by replacing the usual anemic post-talk Q&A. Maybe we can get seating at small round tables for the next WAX, or we could steal Lindsey Kuper’s card-based Q&A idea from POPL OBT.
- We need to remind speakers to skip their motivation slides: everyone at WAX already knows why approximation matters, so that time is better spent on the new idea.
Next year, my co-chairs and I want to get the WAX franchise more organized. Haphazardly cobbling things together one year at a time has been fun, but the workshop is getting bigger and more serious. We should assign real roles, like program chair and publicity chair, which means we’ll need more help. Please get in touch if you’d like to get involved—and thanks to everyone who already volunteered!
See you at WAX 2017.