I love reading each year’s IEEE Micro Top Picks special issue. It’s the lazy computer architect’s source for must-read papers from the last year.
For Top Picks 2014, Luis and Karin tried something new: they asked for community input during the selection process. Even a lowly grad student could submit short comments on each paper. This inevitably made me play “fantasy Top Picks committee” in my mind. And the other night, Tim suggested that folks should share their own Pitchforkesque year-end top-ten lists.
The architecture community needs more of this kind of research commentary. So let’s give this a shot.
My 2014 Top Picks
My favorite papers from the 2014 Top Picks slate (in alphabetical order):
“Aladdin: a Pre-RTL, Power-Performance Accelerator Simulator Enabling Large Design Space Exploration of Customized Architectures,” by Yakun Sophia Shao, Brandon Reagen, Gu-Yeon Wei, and David Brooks.
For the rare feat of publishing a no-holds-barred architecture research tool. The idea is simple: Aladdin mashes up unsound dynamic profiling with optimization heuristics and pattern matching to make guesses at a reasonable hardware design. It sidesteps the persistent weaknesses of high-level synthesis (HLS) by solving a different problem. Extra points for inventing something useful that we didn’t know we needed.
“Race Logic: a Hardware Acceleration for Dynamic Programming Algorithms,” by Advait Madhavan, Timothy Sherwood, and Dmitri Strukov.
For expanding the definition of computation with a deeply wacky yet eminently implementable idea. Race logic exemplifies the balance between creativity and rigor that makes architecture research exciting.
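Race logic’s core trick is to encode values as signal arrival times, so a MIN is just an OR gate (the first edge to arrive wins) and adding a constant is a fixed delay element. Here’s my own toy software analogy of that idea, not anything from the paper itself: a min-plus formulation of edit distance, the kind of dynamic-programming recurrence that maps naturally onto racing signals.

```python
# Toy software analogy of race logic (my sketch, not the paper's hardware):
# values become signal arrival times, MIN becomes "first edge to arrive"
# (an OR gate), and "+ cost" becomes a delay element.

def min_gate(*arrivals):
    """OR gate in race logic: the output fires at the earliest input edge."""
    return min(arrivals)

def delay(arrival, d):
    """Delay element: shift an edge later by d time units."""
    return arrival + d

def edit_distance(a, b):
    """Edit distance via the min-plus recurrence race logic implements."""
    # prev[j] holds the "arrival time" for the (i-1, j) subproblem.
    prev = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        cur = [i] + [0] * len(b)
        for j in range(1, len(b) + 1):
            sub_cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min_gate(
                delay(prev[j - 1], sub_cost),  # substitute / match
                delay(prev[j], 1),             # delete
                delay(cur[j - 1], 1),          # insert
            )
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # classic example: 3
```

In the hardware version, those min-plus operations happen in a single wavefront of racing signals rather than a nested loop, which is where the speed comes from.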
“Flipping Bits in Memory Without Accessing Them,” by Yoongu Kim, Ross Daly, Jeremie Kim, Chris Fallin, Ji Hye Lee, Donghyuk Lee, Chris Wilkerson, Konrad Lai, and Onur Mutlu.
For exposing a shocking and terrifying harbinger of the end of useful DRAM scaling.
“Heterogeneous-Race-Free Memory Models,” by Derek Hower, Blake Hechtman, Bradford Beckmann, Benedict Gaster, Mark Hill, Steven Reinhardt, and David Wood.
For asking the question: What is the minimum common memory consistency guarantee that heterogeneous CPU/GPU hybrid systems should enforce? Put another way, what is the heterogeneous analogue of the sequential-consistency-for-data-race-free-programs baseline that homogeneous multiprocessors enjoy? I don’t agree with all of this paper’s answers, but I strongly agree with the question.
“Load Value Approximation,” by Joshua San Miguel, Mario Badr, and Natalie Enright Jerger.
For a thorough design-space exploration that converges on an elegant, effective implementation of approximation. Approximate computing is close to my heart and LVA is an exemplary execution.
“Memory Persistency,” by Steven Pelley, Peter M. Chen, and Thomas F. Wenisch.
For drawing a connection between memory ordering in multiprocessors and problems in systems that mix non-volatile main memory with volatile caches. The future seems inevitable: systems will get non-volatile main memories, they will combine them with volatile on-chip state, and programmability bugbears will abound. As with the HRF paper above, I love memory persistency for the questions it poses.
“PipeCheck: Specifying and Verifying Microarchitectural Enforcement of Memory Consistency Models,” by Daniel Lustig, Michael Pellauer, and Margaret Martonosi.
“Q100: The Architecture and Design of a Database Processing Unit,” by Lisa Wu, Andrea Lottarini, Timothy Paine, Martha Kim, and Kenneth Ross.
For demonstrating an accelerator design workflow that emphasizes empiricism over intuition. The end result is a paragon of thorough accelerator evaluation.
I had clear conflicts with two great submissions to Top Picks. I can’t in good conscience list them above, but I also can’t go without mentioning them. Both are veritable landmarks:
“A Reconfigurable Fabric for Accelerating Large-Scale Datacenter Services,” by Andrew Putnam, Adrian Caulfield, Eric Chung, Derek Chiou, Kypros Constantinides, John Demme, Hadi Esmaeilzadeh, Jeremy Fowers, Gopi Prashanth Gopal, Jan Gray, Michael Haselman, Scott Hauck, Stephen Heil, Amir Hormati, Joo-Young Kim, Sitaram Lanka, Jim Larus, Eric Peterson, Simon Pope, Aaron Smith, Jason Thong, Phillip Yi Xiao, and Doug Burger.
“Uncertain<T>: A First-Order Type for Uncertain Data,” by James Bornholt, Todd Mytkowicz, and Kathryn S. McKinley.
These lists are subjective and noisy—I’d be shocked if your list is the same as mine or the official picks (whenever they’re announced). So concoct your own! And then email me so I can link to your post here.