PPoPP'05

ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming

 

Chicago, Illinois, June 15-17, 2005

 

Session 1: Welcome and Keynote (Wednesday, June 15, 1:30-3:00)

 

Welcome to PPoPP'05  

Keshav Pingali, Katherine Yelick, and Andrew Grimshaw

Why is Graphics Hardware so Fast?
Pat Hanrahan (Stanford University)

Session 2: Compiling Parallel Languages (Wednesday, June 15, 3:30-5:30)

Chair: Lawrence Rauchwerger

 

Compiler Techniques for High Performance Sequentially Consistent Java Programs

Zehra Sura, Xing Fang, Chi-Leung Wong, Samuel P. Midkiff, Jaejin Lee and David Padua 

 

Effective Communication Coalescing for Data Parallel Applications

Daniel Chavarria-Miranda and John Mellor-Crummey

 

A Linear-Time Algorithm for Optimal Barrier Placement

Alain Darte and Robert Schreiber

 

An Evaluation of Global Address Space Languages: Co-Array Fortran and Unified Parallel C

Daniel Chavarria-Miranda, Cristian Coarfa, Yuri Dotsenko, John Mellor-Crummey, Francois Cantonnet, Tarek El-Ghazawi, Ashrujit Mohanty, and Yiyi Yao

 

Session 3: Synchronization Models (Thursday, June 16, 8:30-10:00)

Chair: Brad Chamberlain

 

Composable Memory Transactions

Tim Harris, Simon Marlow, Simon Peyton Jones and Maurice Herlihy

 

Static Analysis of Atomicity for Programs with Lock-Free Synchronization

Liqiang Wang and Scott D. Stoller

 

Revocable Locks for Non-Blocking Programming

Tim Harris and Keir Fraser

 

Session 4: Verification (Thursday, June 16, 10:30-12:00)

Chair:  Maurice Herlihy

 

Static Analysis of Atomicity for Programs with Non-Blocking Synchronization

Amit Sasturkar, Rahul Agarwal, Liqiang Wang and Scott D. Stoller

 

Modeling Wildcard-Free MPI Programs for Verification

Stephen F. Siegel and George S. Avrunin

 

Scaling Model Checking of Data Races Using Dynamic Information

Ohad Shacham, Mooly Sagiv and Assaf Schuster

 

Session 5: Invited panel  (Thursday, June 16, 1:30-3:00)

Chair: Katherine Yelick

 

Language Innovations for High Productivity Computing Systems

 

As part of the DARPA-sponsored “High Productivity Computing Systems” program, three new languages are being designed with the goal of improving programmer productivity in high performance computing.  The languages are designed for very large-scale parallelism and may take advantage of features of the systems that are also under design by each of the three companies.  Panelists will give an overview of these new languages and some ideas about how their features will be used in parallel applications.  The panelists will be:

 

Brad Chamberlain from Cray, Inc. on the Chapel language

Vijay Saraswat from IBM on the X10 language

David Chase from Sun Microsystems on the Fortress language

 

Session 6: Automatic Parallelization (Thursday, June 16, 3:30-5:30)

Chair:  John Mellor-Crummey

 

A Novel Approach for Partitioning Iteration Spaces with Variable Densities

Arun Kejariwal, Alexandru Nicolau, Utpal Banerjee, and Constantine D. Polychronopoulos

 

Automatic Multithreading and Multiprocessing of C Programs for IXP

Long Li, Bo Huang, Jinquan Dai, and Luddy Harrison

 

Exposing Speculative Thread Parallelism in SPEC2000

Manohar K. Prabhu and Kunle Olukotun

 

Extracting SMP Parallelism for Dense Linear Algebra Algorithms from High-Level Specifications

Tze Meng Low, Robert A. van de Geijn, and Field G. Van Zee

 

Session 7: Energy-Aware Computing (Friday, June 17, 8:30-10:00)

Chair:  Andrew Grimshaw

 

Using Multiple Energy Gears in MPI Programs on a Power-Scalable Cluster

Vincent W. Freeh, David K. Lowenthal, Feng Pan, and Nandani Kappiah

 

Exposing Disk Layout to Compiler for Reducing Energy Consumption of Parallel Disk Based Systems

S. W. Son, G. Chen, M. Kandemir, and A. Choudhary

 

Energy Conservation in Heterogeneous Server Clusters

Taliver Heath, Bruno Diniz, Enrique V. Carrera, Wagner Meira Jr., and Ricardo Bianchini

 

Session 8: Testing and Fault Tolerance (Friday, June 17, 10:30-12:00)

Chair:  Rob Schreiber

 

Trust but Verify: Monitoring Remotely Executing Programs for Progress and Correctness

Shuo Yang, Ali R. Butt, Y. Charlie Hu and Samuel P. Midkiff

 

Applications of Synchronization Coverage

Arkady Bron, Eitan Farchi, Yonit Magid, Yarden Nir and Shmuel Ur

 

Fault Tolerant High Performance Computing by Coding Approaches

Zizhong Chen, Graham E. Fagg, Edgar Gabriel, Julien Langou, Thara Angskun, George Bosilca, and Jack Dongarra

 

Session 9: Architecture and Systems (Friday, June 17, 1:30-3:00) 

Chair:  Sam Midkiff

 

Teleport Messaging for Distributed Stream Programs

William Thies, Michal Karczmarek, Janis Sermulins, Rodric Rabbah, and Saman Amarasinghe

 

Adaptive Execution Techniques for SMT Multiprocessor Architectures

Changhee Jung, Daeseob Lim, Jaejin Lee, and SangYong Han

 

System-Wide Performance Monitors and their Application to the Optimization of Coherent Memory Accesses

Jean-Francois Collard, Norman Jouppi and Sami Yehia

 

Session 10: Libraries and Applications (Friday, June 17, 3:30-5:30)

Chair:  Michelle Strout

 

A Sampling-based Framework for Parallel Data Mining

Shengnan Cong, Jiawei Han, Jay Hoeflinger and David Padua

 

Performance Modeling and Optimization of Parallel Out-of-Core Tensor Contractions

Xiaoyang Gao, Swarup Kumar Sahoo, Qingda Lu, Gerald Baumgartner, Chi-Chung Lam, J. Ramanujam, and P. Sadayappan

 

A Framework for Adaptive Algorithm Selection in STAPL

Nathan Thomas, Gabriel Tanase, Olga Tkachyshyn, Jack Perdue, Nancy M. Amato, and Lawrence Rauchwerger

 

Locality Aware Dynamic Load Management for Massively Multiplayer Games

Jin Chen, Baohua Wu, Margaret Delap, Bjorn Knutsson, Honghui Lu, and Cristiana Amza