Preface

It can be argued that the “mission” of numerical analysis is to provide the scientific community with effective software tools. What makes this enterprise so interesting is that its participants require skills from both mathematics and computer science. Indeed, good software development demands a mathematical understanding of the problem to be solved, a flair for algorithmic expression, and an appreciation for finite precision arithmetic. The aim of this book is to provide the reader with these skills as they pertain to matrix computations.

Great progress has been made in this area since the mid-1950’s. This is borne out by the existence of quality programs for many linear equation, least squares, and eigenvalue problems. Typical are the routines in Eispack and Linpack, whose widespread use has had the effect of elevating the level of algorithmic thought in various applied areas. By using the programs in these packages as building blocks, scientists and engineers can piece together more complicated software tools that are tailored specifically for their needs. This development encourages the writing of well-structured programs, a welcome trend since more and more scientific research is manifested in software.

The impact of numerical linear algebra is felt in other ways. The habits of the field—our reliance on orthogonal matrices, our appreciation of problem sensitivity, our careful consideration of roundoff—have spilled over into many areas of research. A prime example of this is the increased use of the singular value decomposition (SVD) as an analytical tool by many statisticians and control engineers. People in these areas are reformulating numerous theoretical concepts in the “language” of the SVD and as a result are finding it much easier to implement their ideas in the presence of roundoff error and inexact data.
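For readers meeting the decomposition here for the first time, the SVD of a real m-by-n matrix A is the factorization A = UΣVᵀ, where U and V are orthogonal and Σ is an m-by-n diagonal matrix whose nonnegative diagonal entries are the singular values of A. It is this factorization that serves as the “language” referred to above.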

Further evidence of the growing impact of numerical linear algebra is in the area of hardware design. Recent developments in floating point arithmetic and parallel processing have in no small measure been provoked by activity in the matrix computation field.

We have written this book in order to impart a sense of unity to this expanding and exciting field. Much has been accomplished since the publication in 1965 of Wilkinson’s monumental treatise The Algebraic Eigenvalue Problem. Many of these modern developments have been discussed in survey articles and in specialized volumes such as Solving Least Squares Problems by Lawson and Hanson and The Symmetric Eigenvalue Problem by Parlett. We feel that the time is appropriate for a synthesis of this material. In this regard we see Matrix Computations as a comprehensive, somewhat more advanced version of Stewart’s Introduction to Matrix Computations.

We anticipate three categories of readers: graduate students in technical areas, computational scientists and engineers, and our colleagues in numerical analysis. We have included special features addressed to each group.

For students (and their instructors), we have included a large number of problems. Many of these are computational and can form the nucleus of a programming project. Our experience in teaching with the book is that Eispack and Linpack assignments greatly enliven its contents. Matlab, an easy-to-use system for performing matrix calculations, has also been used successfully in conjunction with this text.

For practicing engineers and scientists who wish to use the volume as a reference book, we have tried to minimize the interconnections among chapters. We have also included an annotated bibliography at the end of almost every section in order to hasten the search for reference material on any given topic.

For our colleagues in numerical analysis, we have sprinkled our algorithmic discussions with a generous amount of perturbation theory and error analysis. It is our intention to provide these readers with enough detail so that they can understand and solve the matrix problems that arise in their own work. Research in numerical linear algebra is frequently instigated by ongoing work in other areas of numerical analysis. For example, some of the best methods for solving sparse linear systems have been developed by researchers concerned with the numerical solution of partial differential equations. Similarly, it was the activity in the quasi-Newton area that prompted the development of techniques for updating various matrix factorizations.

This book took six years to write and underwent several title changes: (1) A Last Course in Matrix Computations, (2) Applied Matrix Computations, and (3) Advanced Matrix Computations. The first title, aside from being “cutesy,” is misleading. We do not pretend that our treatment of matrix computations is complete. In particular many topics in the vibrant area of sparse matrix computation were excluded from the text simply because we did not wish to delve into graph theory and data structures. The second title was dismissed because we do not dwell at great length on applications. For the most part, the matrix problems we consider are treated as given—their origins are not chased down. We recognize this as a pedagogic shortcoming but one which can be offset by an experienced teacher. Moreover, we suspect that many of our readers will be experienced themselves, thereby obviating the need for excessive motivation. Finally, the last title was dispensed with because the book does contain introductory material. We chose to include elementary topics for the sake of completeness and because we think that our approach to the rudiments of the subject will interest teachers of undergraduate numerical analysis.

What, then, is the book about if it is incomplete, less than applied, and not entirely advanced? A brief synopsis of its contents should answer this question.

The first three chapters contain the necessary background material. Matrix algebra is reviewed, and some key algorithms are established. The pace is rather brisk. Readers who struggle with the problems in these early chapters will no doubt struggle with the remainder of the book.

Much current research in numerical linear algebra focuses on problems in which the matrices have special structure, e.g., are large and sparse. The art of exploiting structure is the central theme of Chapter 5, where various special-purpose linear equation solvers are described.

Chapter 6 picks up on another trend in the field—the increased reliance on orthogonal matrices. We discuss several orthogonalization methods and show how they can be applied to the least squares problem. Special attention is paid to the handling of rank deficiency.

The all-powerful QR algorithm for the unsymmetric eigenvalue problem is the centerpiece of Chapter 7. Our pedagogic derivation should help to demystify this important technique. We also comment on invariant subspace calculation and the generalized eigenvalue problem.

In Chapter 8, we continue our discussion of the eigenvalue problem by focusing on the important symmetric case. We first describe the symmetric QR algorithm and then proceed to show how symmetry permits several alternative computational procedures.

Up to this point in the book, our treatment of sparsity is rather scattered. Banded linear system solvers are discussed in Chapter 5, simultaneous iteration is described in Chapter 7, Rayleigh quotient iteration in Chapter 8, and so on. Chapters 9 and 10, however, are entirely devoted to the solving of sparse matrix problems. The discussion revolves around the Lanczos method and its country cousin, the method of conjugate gradients. We show how various sparse eigenvalue, least squares, and linear equation problems can be solved using these important algorithms.

The purpose of the last two chapters in the book is to illustrate the wide applicability of the algorithms presented in earlier chapters. Chapter 11 deals with the problem of computing a function of a matrix, something that is frequently required in applications of control theory. Chapter 12 describes a selection of matrix problems, several of which highlight the power of the singular value decomposition.
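A representative instance of such a matrix function is the matrix exponential exp(At), which furnishes the solution x(t) = exp(At)x(0) of the linear differential system x'(t) = Ax(t).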

Indeed, perhaps the most recurring theme in the book is the practical and theoretical value of this matrix decomposition. Its algorithmic and mathematical properties have a key role to play in nearly every chapter. In many respects Matrix Computations is an embellishment of the manuscript “Everything You Wanted to Know About the Singular Value Decomposition (But Were Afraid to Ask)” authored by our colleague Alan Cline.

A word is in order about references to available software. We have concentrated on Eispack and Linpack, and just about every subroutine in these packages is alluded to in the text. In addition, numerous “tech report” references that we cite in the annotated bibliographies are in fact references to software. It should be stressed, however, that we have no direct experience with much of the referenced software aside from Eispack and Linpack. Caveat emptor.

Many people assisted in the production of the book. Richard Bartels helped to gather references and to revise the first draft of some early chapters. The writing of this book was prompted by his organization of the Workshop in Matrix Computations held at Johns Hopkins University in August 1977.

Bob Plemmons, John Dennis, Alan Laub, and Don Heller taught from various portions of the text and provided numerous constructive criticisms. George Cybenko generously helped us with the section on Toeplitz matrices, while Bo Kågström offered many intelligent comments on our treatment of invariant subspace computation. Per-Åke Wedin diligently read early versions of Chapter 5 and expertly guided our revisions. Uri Ascher and Roger Horn have our gratitude for independently suggesting the book’s final title, and so does an anonymous reviewer who made many valuable suggestions.

Last, but not least, we happily acknowledge the influence of our colleagues Cleve Moler and Pete Stewart. Their own work, which so perfectly captures the spirit of the field, has strongly affected the balance and style of our own presentation.