Article ID: | iaor19982436 |
Country: | United States |
Volume: | 6 |
Issue: | 1 |
Start Page Number: | 94 |
End Page Number: | 104 |
Publication Date: | Dec 1994 |
Journal: | ORSA Journal on Computing |
Authors: | Marsten Roy E., Saltzman Matthew J., Jung Ho-Won |
Keywords: | interior point methods |
Interior point algorithms for linear programming achieve significant reductions in computer time over earlier methods for many large linear programming problems (LPs) and solve problems larger than previously possible. The most computationally intensive step in each iteration of any interior point algorithm is the numerical factorization of a sparse, symmetric, positive definite matrix; in large or relatively dense problems, 80–90% or more of the computational time is spent in this step. This study describes our implementations of two algorithms for performing this factorization, the column Cholesky and multifrontal methods, both based on graph theory applied to sparse symmetric matrices. We used advanced techniques such as loop unrolling and equivalent sparse matrix reordering to improve the performance of the factorization step. Our implementations are incorporated into an implementation of the primal–dual barrier algorithm. Computational experiments on relatively large LPs on a DECstation 3100 demonstrate that the primal–dual barrier algorithm using our advanced column Cholesky outperforms the same algorithm using a straightforward column Cholesky by 20–60%. Our multifrontal method with loop unrolling in the partial inner product routine also shows a 10–50% speedup compared with the version without loop unrolling. Overall, the two methods exhibit comparable performance.
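
For readers unfamiliar with the loop unrolling technique mentioned in the abstract, the sketch below illustrates the general idea on a partial inner product, the kind of kernel that dominates sparse Cholesky factorization. It is only an illustrative sketch: the paper's actual routine is not reproduced in the abstract, and the function name `partial_dot`, the unrolling depth of four, and the use of C are assumptions.

```c
#include <stddef.h>

/* Illustrative sketch (not the authors' code): a partial inner product
 * with 4-way manual loop unrolling.  Unrolling reduces loop overhead and
 * gives the compiler several independent accumulators to schedule. */
static double partial_dot(const double *x, const double *y, size_t n)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i = 0;

    /* Main loop: four multiply-adds per iteration. */
    for (; i + 4 <= n; i += 4) {
        s0 += x[i]     * y[i];
        s1 += x[i + 1] * y[i + 1];
        s2 += x[i + 2] * y[i + 2];
        s3 += x[i + 3] * y[i + 3];
    }

    /* Remainder loop for the last n mod 4 elements. */
    for (; i < n; i++)
        s0 += x[i] * y[i];

    return s0 + s1 + s2 + s3;
}
```

In a factorization code such a routine would be called on the overlapping portions of column or frontal-matrix segments; the 10–50% speedup reported for the multifrontal method comes from applying this style of unrolling inside its partial inner product routine.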