Keshav K. Pingali

Ph.D. Massachusetts Institute of Technology, 1986

My research group works on programming languages and compiler technology for program understanding, restructuring, and optimization. Our goal is to develop the algorithms and tools that are required to raise the level of abstraction at which people program computers, freeing them from having to worry about low-level details of machine architectures, memory hierarchies, security policies, etc.

Our recent work has focused on technology for optimizing the performance of programs running on machines with deep memory hierarchies. The compiler community has developed many techniques to address this problem, but the code produced by these techniques cannot compete with hand-optimized code in libraries like LAPACK. We invented a radically different approach called data-shackling, and showed that it outperforms previously published techniques on many important scientific benchmarks. This technology has recently been incorporated into the SGI MIPSPro compiler product line.
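Data-shackling blocks the data of a computation rather than its loops; the details are beyond this summary, but the following toy Python sketch illustrates the shared goal of blocking for memory-hierarchy locality, using classic loop tiling of matrix multiplication (an illustrative stand-in, not the data-shackling algorithm itself):

```python
# Naive triply-nested matrix multiply: streams over whole rows/columns,
# so large matrices thrash the cache.
def matmul_naive(A, B, n):
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

# Tiled multiply: the three outer loops walk bs-by-bs blocks, so each
# inner computation touches only small submatrices that fit in cache.
def matmul_tiled(A, B, n, bs=2):
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):
        for jj in range(0, n, bs):
            for kk in range(0, n, bs):
                for i in range(ii, min(ii + bs, n)):
                    for j in range(jj, min(jj + bs, n)):
                        for k in range(kk, min(kk + bs, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C
```

Both versions compute the same result; only the iteration order, and hence the memory-access pattern, differs.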

A highlight of this year’s research activity is the invention of fractal symbolic analysis, a powerful program analysis technique that permits compilers to restructure complicated programs far beyond the reach of conventional compilers based on dependence analysis. Traditional symbolic analysis is powerful but intractable for most programs. To circumvent this problem, fractal symbolic analysis analyzes a program and its transformed version by repeatedly simplifying both programs until symbolic analysis becomes tractable, while ensuring that equality of the simplified programs is sufficient to guarantee equality of the originals. We have shown that this approach is adequate for restructuring codes such as LU factorization with pivoting, which have not yielded to previous techniques in the literature. The fractal approach to analysis is likely to prove useful for proving other program properties.
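As a toy illustration of why symbolic equivalence succeeds where dependence analysis fails, consider two updates of the same accumulator: dependence analysis forbids reordering them (both statements write the same variable), yet the two orders compute the same value. The sketch below, with hypothetical helper names, performs only a final symbolic-equivalence check on straight-line code, not the fractal simplification itself:

```python
# Toy symbolic equivalence check: interpret straight-line code over
# expression trees, then normalize commutative '+' so that programs
# computing the same value produce identical normal forms.

def run_symbolic(stmts):
    """Each stmt is (target, (op, arg_name, ...)); values are expression
    trees whose leaves are symbolic initial values ('sym', name)."""
    env = {}
    def lookup(x):
        return env.get(x, ('sym', x))  # unread names stay symbolic
    for target, (op, *args) in stmts:
        env[target] = (op,) + tuple(lookup(a) for a in args)
    return env

def normalize(e):
    """Flatten nested '+' and sort its operands (commutativity and
    associativity), so equal values get equal representations."""
    if e[0] != '+':
        return e
    terms = []
    for a in e[1:]:
        a = normalize(a)
        terms.extend(a[1:] if a[0] == '+' else [a])
    return ('+',) + tuple(sorted(terms))

# s = s + a; s = s + b   versus   s = s + b; s = s + a
p1 = [('s', ('+', 's', 'a')), ('s', ('+', 's', 'b'))]
p2 = [('s', ('+', 's', 'b')), ('s', ('+', 's', 'a'))]
assert normalize(run_symbolic(p1)['s']) == normalize(run_symbolic(p2)['s'])
```

A dependence-based compiler must reject this reordering outright; the symbolic comparison shows the two programs are equivalent.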

We are continuing our work on next-generation generic programming techniques. In our approach, algorithm implementors use a different API than data structure designers, and the gap between these APIs is bridged by a compiler. One view of this approach is that it exploits restructuring compiler technology to perform a novel kind of template instantiation. We are demonstrating the usefulness of this new technology by deploying it in a system that generates efficient sparse codes from high-level algorithms and specifications of sparse matrix formats.
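As an illustration of the target of such a system, the sketch below (a hypothetical function, not our compiler's actual output) shows the kind of code a generator specialized to the compressed sparse row (CSR) format might emit from the dense-style specification y[i] += A[i][j] * x[j]:

```python
# Sparse matrix-vector product specialized to the CSR format:
# values[k] holds the k-th stored nonzero, colind[k] its column,
# and rowptr[i]:rowptr[i+1] delimits the nonzeros of row i.
def spmv_csr(values, colind, rowptr, x):
    y = [0.0] * (len(rowptr) - 1)
    for i in range(len(y)):
        for k in range(rowptr[i], rowptr[i + 1]):  # nonzeros of row i only
            y[i] += values[k] * x[colind[k]]
    return y
```

The algorithm implementor writes the dense specification; the format designer supplies the CSR description; the compiler bridges the two, iterating only over stored nonzeros.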

These ongoing projects build on our earlier work on restructuring compilation technology. Our group implemented one of the first compilers that generated code for distributed memory machines, starting from sequential shared memory programs. We introduced techniques called runtime resolution and the owner-computes rule, which have since become standard in the area. Our work on linear loop transformations for enhancing parallelism and locality has been incorporated by Hewlett-Packard into its entire compiler product line. We also developed fast algorithms for program analysis problems such as computing the control dependence relation, the static single assignment form of a program, and dataflow analyses. Many of these algorithms have been incorporated into commercial and research compilers.

We also participate in a large multi-disciplinary project on fracture simulation on parallel computers. Other researchers working on this project are Tony Ingraffea (Civil Engineering), Steve Vavasis, Guang Gao (CS, Delaware), Nikos Chrisochoides (CS, Notre Dame), Paul Chew, and Paul Stodghill. This group is developing guaranteed-quality parallel mesh generators, new preconditioners for sparse linear system solvers, sparse matrix compilation techniques, and multi-threaded runtime systems.

University Activities

Member: Computing Policy Committee; Cornell Theory Center Computing Allocation Committee.

Professional Activities

Program Committee Member: HiPC’00: International Conference on High-Performance Computing; LCR2000: 5th Workshop on Languages, Compilers, and Run-time Systems for Scalable Computers; PPoPP ’99: ACM Conference on Principles and Practice of Parallel Programming.

Consultant: Silicon Graphics Inc.

Referee/Reviewer: ACM TOPLAS; IEEE Trans. Computers; Journal of Parallel and Distributed Computing; Journal of Supercomputing; IEEE Computer; Software Practice and Experience.

Editorial Board Member: Int. Journal of Parallel Programming; Discrete Mathematics and Theoretical Computer Science.


Lectures

Improving cache performance of scientific codes. IBM T.J. Watson Research Center, Yorktown Heights, NY, November 2000.

High-performance Computing: Ongoing and Future Challenges. University of Rochester, Rochester, NY, May 2000.

Fractal Symbolic Analysis. IBM T.J. Watson Research Center, Yorktown Heights, NY, April 2000.

—. Department of Computer Science, M.I.T., Cambridge, MA, June 2000.

—. Hewlett-Packard Laboratories, Cambridge, MA, June 2000.


Publications

“A Case for Source-Level Transformations in MATLAB.” The 2nd Conference on Domain-Specific Languages (1999), 35–45 (with V. Menon).

“Next-generation Generic Programming and its Application to Sparse Matrix Computations.” International Conference on Supercomputing, 2000, 88–99 (with N. Mateev, P. Stodghill, and V. Kotlyar).

“Synthesizing Transformations for Locality Enhancement of Imperfectly-nested Loop Nests.” International Conference on Supercomputing, 2000, 141–152 (with N. Ahmed and N. Mateev).

“Parallel FEM Simulation of Crack Propagation - Challenges, Status, and Perspectives.” Irregular 2000, 55–63 (with B. Carter et al.).

“Automatic Generation of Block-Recursive Codes.” EUROPAR 2000, 125–134 (with N. Ahmed).

“Left-looking to Right-looking and vice versa: An Application of Fractal Symbolic Analysis to Linear Algebra Code Restructuring.” EUROPAR 2000, 155–164 (with N. Mateev and V. Menon).

“Fractal Symbolic Analysis for Program Transformations.” Technical Report TR2000-1781, Cornell Computer Science Dept. (2000) (with N. Mateev and V. Menon).

“Tiling Imperfectly-nested Loop Nests.” Technical Report TR2000-1782, Cornell Computer Science Dept. (2000) (with N. Ahmed and N. Mateev).

“Compiling Imperfectly-nested Sparse Matrix Codes with Dependences.” Technical Report TR2000-1788, Cornell Computer Science Dept. (2000) (with N. Ahmed, N. Mateev, and P. Stodghill).