Our research project is concerned with the design and understanding of practical and efficient numerical algorithms for continuous optimization problems. Our primary focus has been on the development of algorithms for large-scale optimization: exploitation of sparsity and use of parallelism have been two dominant themes.
Many large-scale optimization problems display sparsity in both second-derivative (Hessian) matrices and constraint matrices, and efficient optimization methods must exploit this fact. In recent years we have approached this issue from several directions, resulting in a number of successful and original contributions. These include techniques for the efficient estimation of sparse Jacobian and Hessian matrices (including a complete theoretical analysis and high-quality software); an original analysis of the sparse null basis problem (given a sparse rectangular matrix with more columns than rows, determine a sparse, or compact, representation of its null space), together with algorithms for finding a sparse null basis; and new direct-factorization methods for solving large, sparse bound-constrained quadratic programming problems.
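To illustrate the flavor of sparse-derivative estimation: when two columns of a Jacobian have no nonzero in a common row, they can be estimated with a single finite-difference evaluation. The sketch below (our own illustrative code, with hypothetical names; it uses a simple greedy grouping rather than the specific algorithms developed in the project) shows the idea.

```python
import numpy as np

def group_columns(sparsity):
    """Greedily partition columns into structurally orthogonal groups:
    two columns share a group only if they have no nonzero in the same
    row. `sparsity` is a boolean (m x n) nonzero pattern of the Jacobian."""
    m, n = sparsity.shape
    groups = []                      # list of (column list, covered-row mask)
    assignment = np.empty(n, dtype=int)
    for j in range(n):
        rows_j = sparsity[:, j]
        for g, (cols, covered) in enumerate(groups):
            if not np.any(covered & rows_j):      # no row conflict
                cols.append(j)
                groups[g] = (cols, covered | rows_j)
                assignment[j] = g
                break
        else:
            groups.append(([j], rows_j.copy()))
            assignment[j] = len(groups) - 1
    return assignment, len(groups)

def estimate_sparse_jacobian(f, x, sparsity, h=1e-6):
    """Estimate J = f'(x) with one forward difference per column group
    instead of one per column."""
    m, n = sparsity.shape
    assignment, ngroups = group_columns(sparsity)
    fx = f(x)
    J = np.zeros((m, n))
    for g in range(ngroups):
        d = np.where(assignment == g, h, 0.0)     # perturb the whole group at once
        df = (f(x + d) - fx) / h
        for j in np.where(assignment == g)[0]:
            J[sparsity[:, j], j] = df[sparsity[:, j]]
    return J
```

For a tridiagonal Jacobian this needs only three function evaluations beyond f(x), regardless of the dimension n.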
The computational demands of many large-scale optimization problems can be extremely high; therefore, it is important to explore the practical potential of parallelism in the solution of optimization problems. We have been active in this arena in recent years, having considered several important computational problems related to optimization, including the fast parallel solution of triangular systems of equations; the parallel solution of general systems of nonlinear equations; and the parallel solution of nonlinear least-squares problems.
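As one illustration of where parallelism hides in a seemingly sequential computation, consider a sparse triangular solve. Level scheduling is a standard way to expose its parallelism (a generic sketch, not necessarily the technique used in this project): rows are grouped into levels such that all rows in a level depend only on earlier levels and can be solved concurrently.

```python
import numpy as np

def level_schedule(L):
    """Partition the rows of a sparse lower-triangular matrix into levels:
    row i depends only on rows in strictly earlier levels, so all rows in
    one level can be solved concurrently."""
    n = L.shape[0]
    level = np.zeros(n, dtype=int)
    for i in range(n):
        deps = [j for j in range(i) if L[i, j] != 0]
        level[i] = max((level[j] + 1 for j in deps), default=0)
    return level

def solve_lower(L, b):
    """Forward substitution processed level by level. Within a level the
    row updates are independent (a parallel loop on a real machine; a
    plain Python loop here)."""
    n = L.shape[0]
    x = np.zeros(n)
    level = level_schedule(L)
    for lev in range(level.max() + 1):
        for i in np.where(level == lev)[0]:   # independent, parallelizable work
            x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x
```

A dense triangular matrix collapses to n levels (fully sequential), while a very sparse one may have only a handful of levels, each offering wide parallelism.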
The development of effective, reliable, and efficient algorithms for problems with nonlinear constraints is still an active research area: significant progress has been made in the last ten years, but the state of affairs remains unsatisfactory. With Wei Yuan, a graduating PhD student in applied mathematics, we have developed new quasi-Newton and large-scale approaches.
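A representative building block in quasi-Newton methods for nonlinearly constrained problems is Powell's damped BFGS update, which keeps the Hessian approximation positive definite even when the observed curvature is negative. The sketch below is a standard textbook formulation offered for illustration; it is not the specific update developed in this work.

```python
import numpy as np

def damped_bfgs_update(B, s, y, theta_bound=0.2):
    """Powell's damped BFGS update: replace y by a convex combination
    y_bar = theta*y + (1-theta)*B s so that s'y_bar > 0, keeping B
    positive definite even when the curvature s'y is negative (common
    away from the solution in constrained problems)."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy >= theta_bound * sBs:
        theta = 1.0                                  # no damping needed
    else:
        theta = (1 - theta_bound) * sBs / (sBs - sy)
    y_bar = theta * y + (1 - theta) * Bs
    return B - np.outer(Bs, Bs) / sBs + np.outer(y_bar, y_bar) / (s @ y_bar)
```

When no damping is triggered (theta = 1), this reduces to the ordinary BFGS update and satisfies the secant condition B_new s = y.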
Our recent research has dealt primarily with large-scale optimization problems with bound constraints on the variables. Such problems are very common, and effective methods for use in the large-scale setting are needed. We are investigating an entirely new approach: interior Newton methods. The underlying ideas in this effort evolved from work (primarily with colleague Yuying Li) on second-order methods for solving various convex optimization problems. Our new approach does not require convexity and appears to be generally applicable. With graduating PhD student Mary Ann Branch, we have pursued generalizations of this approach to large-scale nonlinear optimization problems with bound constraints on the variables. Continuing work involves adapting these approaches to exploit parallelism and implementing them on the IBM SP-2 supercomputer.
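To convey the interior-point flavor of such methods, the sketch below applies an affine-scaling idea to a bound-constrained problem: each gradient component is scaled by the distance to the bound it is pushing toward, and a fraction-to-boundary rule keeps the iterates strictly interior. This is a deliberately simplified first-order sketch with hypothetical names; the methods described above take Newton steps on the scaled system.

```python
import numpy as np

def interior_scaled_descent(grad, x0, lower, upper,
                            alpha=0.1, tau=0.99, iters=500):
    """Minimize f(x) subject to lower <= x <= upper using a scaled
    gradient step that stays strictly inside the bounds."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        g = grad(x)
        # v_i: distance to the bound that -g_i points toward
        v = np.where(g < 0, upper - x, x - lower)
        p = -alpha * v * g                   # scaled steepest-descent step
        # fraction-to-boundary rule: shrink so x + t*p stays strictly inside
        with np.errstate(divide='ignore', invalid='ignore'):
            to_upper = np.where(p > 0, (upper - x) / p, np.inf)
            to_lower = np.where(p < 0, (lower - x) / p, np.inf)
        t = min(1.0, tau * min(to_upper.min(), to_lower.min()))
        x = x + t * p
    return x
```

The scaling automatically shortens steps for components near their active bounds, so the iteration approaches the constrained solution from the interior without ever touching the boundary.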