Article ID: | iaor2012215 |
Volume: | 51 |
Issue: | 1 |
Start Page Number: | 51 |
End Page Number: | 75 |
Publication Date: | Jan 2012 |
Journal: | Computational Optimization and Applications |
Authors: | Thomas, Douglas; Freimer, Michael; Linderoth, Jeffrey |
Keywords: | programming: linear, statistics: sampling |
Stochastic linear programs can be solved approximately by drawing a subset of all possible random scenarios and solving the problem based on this subset, an approach known as sample average approximation (SAA). The value of the objective function at the optimal solution obtained via SAA provides an estimate of the true optimal objective function value. This estimator is known to be optimistically biased: for minimization problems, the expected optimal objective function value of the sampled problem is lower than the optimal objective function value of the true problem. We investigate how two alternative sampling methods, antithetic variates (AV) and Latin Hypercube (LH) sampling, affect both the bias and the variance, and thus the mean squared error (MSE), of this estimator. For a simple example, we analytically express the reductions in bias and variance obtained by these two alternative sampling methods. For eight test problems from the literature, we computationally investigate the impact of these sampling methods on bias and variance. We find that both sampling methods are effective at reducing mean squared error, with Latin Hypercube sampling outperforming antithetic variates. For our analytic example and the eight test problems, we derive or estimate the condition number as defined in Shapiro et al. (Math. Program. 94:1–19, 2002). We find that for ill-conditioned problems, bias plays a larger role in MSE, and AV and LH sampling methods are more likely to reduce bias.
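The optimistic bias of the SAA estimator, and the effect of the two alternative sampling schemes, can be illustrated on a toy problem. The sketch below uses minimize f(x) = E[(x − ξ)²] with ξ ~ N(0, 1), which has true optimum x* = 0 and f(x*) = 1; the SAA optimal value is then the (biased) sample variance, with expectation (n − 1)/n < 1. This quadratic example and the sampler implementations are our own illustration, not the paper's analytic example or test problems, and the relative MSE of the three schemes on this toy need not match the paper's findings.

```python
import numpy as np
from scipy.special import ndtri  # inverse standard normal CDF

rng = np.random.default_rng(0)

def saa_value(xi):
    # SAA optimal value for min_x (1/n) * sum (x - xi_i)^2:
    # the minimizer is x_hat = mean(xi), so the value is the
    # sample variance with divisor n (expectation (n-1)/n < 1).
    return np.mean((xi - xi.mean()) ** 2)

def iid(n):
    # plain Monte Carlo draw
    return rng.standard_normal(n)

def antithetic(n):
    # antithetic variates: pair each draw z with -z (n assumed even)
    z = rng.standard_normal(n // 2)
    return np.concatenate([z, -z])

def latin_hypercube(n):
    # Latin Hypercube: one uniform draw in each of n equal strata
    # of (0, 1), shuffled, then mapped through the normal inverse CDF
    u = (rng.permutation(n) + rng.random(n)) / n
    return ndtri(u)

n, reps = 20, 20_000
for name, sampler in [("iid", iid),
                      ("antithetic", antithetic),
                      ("Latin Hypercube", latin_hypercube)]:
    vals = np.array([saa_value(sampler(n)) for _ in range(reps)])
    bias = vals.mean() - 1.0          # true optimal value is 1
    mse = np.mean((vals - 1.0) ** 2)
    print(f"{name:16s} bias {bias:+.4f}  MSE {mse:.5f}")
```

On this toy problem the iid bias is close to the analytic value −1/n = −0.05, while the antithetic pairing makes the sample mean exactly zero and removes the bias entirely, and Latin Hypercube sampling sharply reduces both bias and variance.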