Date Awarded

2007

Document Type

Dissertation -- Access Restricted On-Campus Only

Degree Name

Doctor of Philosophy (Ph.D.)

Department

Computer Science

Abstract

Iterative solvers for eigenvalue problems are often the only means of computing the extremal eigenvalues of the large sparse eigenproblems that arise in many engineering and scientific applications. These solvers often demand a large portion of the computational cycles on scientific computing platforms. Current parallel implementations are limited in scalability, especially on collections of clusters interconnected through a hierarchy of networks. Moreover, existing solvers are often effective at finding either a small or a large number of eigenvalues, but not necessarily both. The algorithms can also require fine-tuning and may even miss some of the required eigenvalues, making them insufficiently robust and unnecessarily difficult to use. We improve upon the current state of the art with our innovations in multigrain parallelism and our research in multimethod solvers.

We developed a latency-tolerant technique, referred to as multigrain parallelism, by combining different granularities in a parallel implementation of the block Jacobi-Davidson algorithm. Block methods have traditionally been used to improve cache performance and to perform more floating-point operations between synchronizations on parallel computers. Multigrain parallelism is a different approach to latency tolerance: it splits the processors into subgroups, each of which then solves the correction equation for one block vector concurrently. We present results obtained with our multigrain Jacobi-Davidson eigenvalue solver and show that multigrain parallelism is effective both on MPPs and on collections of clusters.

We also propose an efficient multimethod solver that improves robustness and ease of use. The solver will incorporate the theoretical and technological advancements described in this dissertation. These advancements focus primarily on near-optimal variants of the Jacobi-Davidson method and include: alternative projection techniques that allow the solver to find a large number of eigenvalues more efficiently, a performance model for determining which of the two most competitive techniques should be used, and an asymptotic performance model for predicting the behavior of our methods relative to other methods when a large number of eigenvalues is required. These models, together with extensive experimentation, make an efficient and robust implementation possible. We also developed an iterative validation algorithm that increases confidence in the eigenvalues computed by any iterative solver, addressing a serious drawback of iterative eigenvalue methods: the possibility of missed eigenvalues. The algorithm attempts to detect missed eigenvalues by rerunning the given solver with increasing block sizes and locking. Such eigenvalue software has long been awaited by users.
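To make the multigrain idea concrete, the following is a minimal sketch, not taken from the dissertation, of how MPI processes might be split into subgroups so that each subgroup solves the correction equation for one block vector independently. The block size, variable names, and the shape of the inner solve are all assumptions for illustration.

```python
# Illustrative sketch only: split MPI processes into subgroups so each
# subgroup can work on the correction equation for one block vector.
# block_size and the correction-equation details are assumptions.
from mpi4py import MPI

world = MPI.COMM_WORLD
block_size = 4                      # assumed number of block vectors per outer iteration
color = world.rank % block_size     # assign this process to one of block_size subgroups
subcomm = world.Split(color=color, key=world.rank)

# Each subgroup now owns an independent communicator and can run an inner
# iterative solve of its correction equation, e.g. (A - theta_i I) t_i = -r_i,
# synchronizing only within subcomm. The subgroups rejoin on the world
# communicator for the outer Jacobi-Davidson step.
world.Barrier()
```

Because the inner solves communicate only within their subgroup communicator, the expensive cross-cluster links are touched only at the outer-iteration synchronization points, which is the latency-tolerance argument the abstract makes.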
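The validation algorithm is described only at a high level above; the sketch below shows one plausible reading, rerunning a hypothetical solver with doubling block sizes while locking already-accepted eigenpairs. The solve interface and its parameters are illustrative, not the dissertation's API.

```python
# Hypothetical sketch of the iterative validation idea: rerun a given
# eigensolver with increasing block sizes, locking converged eigenpairs,
# to detect eigenvalues an earlier pass may have missed.
def validate(solve, A, locked, max_block=8):
    """solve(A, block_size, locked) -> list of newly converged (value, vector) pairs.

    `solve` is an assumed interface standing in for any iterative eigensolver.
    """
    block_size = 1
    while block_size <= max_block:
        new_pairs = solve(A, block_size, locked)
        if not new_pairs:           # no missed eigenvalues detected at this block size
            break
        locked.extend(new_pairs)    # lock them so subsequent runs search elsewhere
        block_size *= 2             # widen the block to expose clustered eigenvalues
    return locked
```

Locking the accepted eigenpairs forces each rerun to converge to different directions, so any eigenvalue the block finds that is not already locked is evidence of a miss in the earlier pass.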

DOI

https://dx.doi.org/doi:10.21220/s2-fsmv-3812

Rights

© The Author
