Article ID: | iaor20081453 |
Country: | United States |
Volume: | 54 |
Issue: | 3 |
Start Page Number: | 489 |
End Page Number: | 504 |
Publication Date: | May 2006 |
Journal: | Operations Research |
Authors: | Borkar V.S., Ahamed T.P.I., Juneja S. |
Keywords: | queues: theory, simulation |
For a discrete-time finite-state Markov chain, we develop an adaptive importance sampling scheme to estimate the expected total cost before hitting a set of terminal states. This scheme updates the change of measure at every transition using constant or decreasing step-size stochastic approximation. The updates are shown to concentrate asymptotically in a neighborhood of the desired zero-variance estimator. Through simulation experiments on simple Markovian queues, we observe that the proposed technique performs very well in estimating performance measures associated with rare events in which queue lengths exceed prescribed thresholds. We compare the performance of the proposed algorithm with that of existing adaptive importance sampling algorithms on several examples. We also discuss the extension of the technique to estimating the infinite-horizon expected discounted cost and the expected average cost.
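The following is a minimal Python sketch in the spirit of the scheme the abstract describes, not the authors' exact algorithm. It estimates the probability that a birth-death chain (a crude single-queue model) reaches a threshold B before emptying, samples each transition from a change of measure proportional to p(x, y)·V(y), and updates the value estimates V with a constant step-size stochastic-approximation rule at every transition. The chain, the threshold B, the step size, and the TD-style update are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for the illustration.
B = 10            # rare-event threshold: estimate P(queue reaches B before emptying)
p_up = 0.3        # probability of an up-step (arrival); down-step has probability 0.7
step = 0.01       # constant stochastic-approximation step size
episodes = 20_000

# V[x]: running estimate of the hitting probability starting from state x.
# Terminal states: 0 (contributes 0) and B (contributes 1).
V = np.full(B + 1, 0.5)
V[0], V[B] = 0.0, 1.0

def transition_law(x):
    """Original transition probabilities of the birth-death chain at an interior state x."""
    return np.array([x - 1, x + 1]), np.array([1.0 - p_up, p_up])

samples = []
for _ in range(episodes):
    x, lr = 1, 1.0                       # start state and accumulated likelihood ratio
    while 0 < x < B:
        ys, p = transition_law(x)
        w = p * V[ys]                    # change of measure proportional to p(x, y) * V(y)
        q = p if w.sum() == 0.0 else w / w.sum()
        i = rng.choice(2, p=q)
        y = ys[i]
        ratio = p[i] / q[i]
        lr *= ratio                      # likelihood ratio of the sampled transition
        # Per-transition stochastic-approximation update of V toward the
        # fixed point V(x) = sum_y p(x, y) V(y) (one-step cost is zero here).
        V[x] += step * (ratio * V[y] - V[x])
        V[x] = min(max(V[x], 1e-12), 1.0)
        x = y
    samples.append(lr * (1.0 if x == B else 0.0))

est = np.mean(samples)
r = (1.0 - p_up) / p_up
exact = (r - 1.0) / (r ** B - 1.0)       # gambler's-ruin probability from the start state 1
print(f"adaptive IS estimate: {est:.3e}   exact: {exact:.3e}")
```

As V approaches the true hitting probabilities, the sampled transition law approaches the zero-variance change of measure, so later episodes contribute low-variance likelihood-ratio estimates while every episode remains unbiased.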