We were motivated to write a revised and expanded version of *Matrix
Computations* for two reasons. First, after five years of teaching from the old edition
and hearing from colleagues, it became clear that much of what we wrote could be written
better. Many of the proofs and derivations have been clarified and we have adopted a
stylized Matlab notation that facilitates matrix/vector thinking. Another stylistic
feature of the new edition is the classification of material down to the subsection level.
This should make the volume easier to use as a textbook and as a reference.

The second reason for the new edition has to do with the blossoming of the parallel matrix computation area. The multiprocessor is revolutionizing the role of computation in science and engineering and it is crucial that we document the contributions of numerical linear algebra to this process. We do so in a machine-independent way that stresses high-level algorithmic ideas in preference to specific implementations. As we say in the new chapter on parallel matrix computations, the field is very fluid and the literature is overwhelmed with case studies. Nevertheless, there are a handful of key algorithmic developments that are likely to be with us for a long time and these are most definitely ready for textbook level discussion.

Here is a chapter-by-chapter tour of what is new in the Second Edition:

In Chapter 1 (Matrix Multiplication Problems) we introduce notation and fundamental concepts using matrix multiplication as an example. Block matrix manipulation is given a very high profile and we have added a new section on vector pipeline computing.

In Chapter 2 (Matrix Analysis) we have concentrated all the mathematical background required for the derivation and analysis of least squares and linear system algorithms.

In Chapter 3 (General Linear Systems) and Chapter 4 (Special Linear Systems) there is new material on how to organize the Gaussian elimination and Cholesky procedures for ``high performance''. Emphasis is placed on implementations that are rich in matrix-vector and matrix-matrix multiplication. A subsection on positive semi-definite matrices has been added.

Chapter 5 (Orthogonalization and Least Squares) has new material on block Householder computations. We have also adjusted the order of presentation so as to decouple the discussion of orthogonal factorizations from the discussion of linear least squares fitting.

In Chapter 6 (Parallel Matrix Computations) we use the gaxpy operation to introduce distributed memory and shared memory computation. The organization of matrix multiplication and various matrix factorizations in these two multiprocessing environments is also discussed.
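For readers unfamiliar with the term, gaxpy (generalized saxpy) denotes the matrix-vector update y ← y + Ax. The following is an illustrative column-oriented sketch, not code from the book; the function name and list-of-lists representation are assumptions for the example:

```python
def gaxpy(A, x, y):
    """Column-oriented gaxpy: overwrite y with y + A*x.

    A is an m-by-n matrix stored as a list of m rows; x has length n,
    y has length m. Each pass of the outer loop adds x[j] times
    column j of A into y, the access pattern stressed in the text.
    """
    m, n = len(A), len(A[0])
    for j in range(n):          # one column of A per step
        for i in range(m):
            y[i] += A[i][j] * x[j]
    return y

# Example: with A = [[1, 2], [3, 4]], x = [1, 1], y = [0, 0],
# gaxpy returns [3, 7].
```

The column-by-column organization is what makes gaxpy a natural vehicle for discussing distributed and shared memory variants: columns (or blocks of columns) of A can be assigned to different processors, each accumulating a partial contribution to y.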

In Chapter 7 (The Unsymmetric Eigenvalue Problem) we added a subsection on block Hessenberg reductions.

Chapter 8 (The Symmetric Eigenvalue Problem) has two fundamental alterations. The Jacobi method section was completely redone in light of recent developments in parallel computing. We also added a section on a new highly parallel divide and conquer algorithm for the tridiagonal problem.

In Chapter 9 (The Lanczos Method) we have added a subsection on the Arnoldi method. Chapter 10 (Iterative Methods for Linear Systems) includes an expanded discussion of the preconditioned conjugate gradient algorithm. The only significant change in Chapter 11 (Functions of Matrices) and Chapter 12 (Special Topics) is a new subsection in Section 12.6 on hyperbolic downdating.

We are deeply indebted to the many individuals who have pointed out the typos, mistakes, and expository shortcomings of the First Edition. It has been a pleasure to deal with such an interested and friendly readership. From this large group of correspondents we would like to thank A. Bjorck, J. Bunch, J. Dennis, T. Ericsson, O. Hald, N. Higham, A. Laub, R. Le Veque, M. Overton, B.N. Parlett, R. Plemmons, R. Skeel, E. Stickel, S. Van Huffel, and R.S. Varga for their particularly detailed and cogent remarks.

We also wish to acknowledge the contributions of individuals and organizations that have had a critical role to play in the production and shaping of this second edition. At the LaTeX level we were assisted by Alex Aiken, Chris Bischof, Gil Neiger, and Hal Perkins at Cornell and by Mark Kent at Stanford. Cindy Robinson-Hubbell at Cornell machine-coded the first edition and was absolutely indispensable during all subsequent phases of production. The research facilities made available by Iain Duff at Harwell Laboratory in the United Kingdom and Bill Morton at Oxford University were essential to the revision process.

Iain Duff, Bo Kagstrom, Chris Paige, and Nick Trefethen each contributed to the work in profound philosophical and technical ways. Our sincere gratitude goes to these friends and colleagues. Nick Higham carefully read the entire manuscript and offered countless suggestions. The publication of the Second Edition would have been much delayed and much inferior without Nick's help.

Finally, we wish to acknowledge the many contributions of Jim Wilkinson, our very dear colleague who passed away in 1986. Jim continues to be a great inspiration and it is with pleasure that we dedicate the new volume to both him and Alston Householder.
