Monte Carlo simulation and large deviations theory for uniformly recurrent Markov chains

Article ID: iaor1990759
Country: Israel
Volume: 27
Issue: 1
Start Page Number: 1
End Page Number: 7
Publication Date: Mar 1990
Journal: Journal of Applied Probability
Authors: , ,
Abstract:

Importance sampling is a Monte Carlo simulation technique in which the simulation distribution differs from the true underlying distribution. To obtain an unbiased Monte Carlo estimate of the desired parameter, simulated events are weighted to reflect their true relative frequency. In this paper, the authors consider the estimation via simulation of certain large deviations probabilities for time-homogeneous Markov chains. They first demonstrate that when the simulation distribution is also a homogeneous Markov chain, the estimator variance vanishes exponentially as the sample size n tends to ∞. The authors then prove that the estimator variance is asymptotically minimized by the same exponentially twisted Markov chain that arises in large deviations theory, and furthermore, that this optimum is unique among uniformly recurrent homogeneous Markov chain simulation distributions.
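
The following is a minimal sketch of the kind of scheme the abstract describes, not the paper's own construction: importance sampling of a Markov-chain large deviations probability using an exponentially twisted transition kernel as the simulation chain, with the likelihood-ratio weight applied path by path. The transition matrix P, the additive functional f, the threshold a, the twisting parameter theta, and the helper is_estimate are all illustrative assumptions chosen for this example.

```python
import numpy as np

# Hypothetical two-state example (not from the paper): estimate the large
# deviations probability p_n = P(S_n >= n*a), where S_n = f(X_1)+...+f(X_n)
# and X is a Markov chain with transition matrix P.  The simulation chain is
# the exponentially twisted kernel
#   P_theta(x, y) = P(x, y) * exp(theta * f(y)) * r(y) / (lambda(theta) * r(x)),
# where lambda(theta) and r are the Perron-Frobenius eigenvalue/right
# eigenvector of the tilted matrix [P(x, y) * exp(theta * f(y))].

rng = np.random.default_rng(0)

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # assumed "true" transition matrix
f = np.array([0.0, 1.0])     # additive functional: S_n counts visits to state 1
a = 0.5                      # threshold; the stationary mean of f under P is 1/3 < a
theta = 1.5                  # twisting parameter (in practice tuned so the
                             # twisted chain's mean of f equals a)

# Build the exponentially twisted transition kernel.
tilted = P * np.exp(theta * f)[None, :]             # entries P(x,y) * e^{theta f(y)}
eigvals, eigvecs = np.linalg.eig(tilted)
k = np.argmax(eigvals.real)
lam = eigvals[k].real                               # Perron-Frobenius eigenvalue
r = np.abs(eigvecs[:, k].real)                      # positive right eigenvector
P_theta = tilted * r[None, :] / (lam * r[:, None])  # stochastic matrix
P_theta /= P_theta.sum(axis=1, keepdims=True)       # guard against round-off

def is_estimate(n, num_paths):
    """Importance sampling estimate of P(S_n >= n*a) using the twisted chain."""
    estimates = np.zeros(num_paths)
    for i in range(num_paths):
        x = 0                       # fixed initial state
        s, log_w = 0.0, 0.0
        for _ in range(n):
            y = rng.choice(2, p=P_theta[x])
            log_w += np.log(P[x, y]) - np.log(P_theta[x, y])  # likelihood ratio
            s += f[y]
            x = y
        if s >= n * a:
            estimates[i] = np.exp(log_w)  # weighted indicator of the rare event
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(num_paths)

est, se = is_estimate(n=100, num_paths=5000)
print(f"IS estimate: {est:.3e} (std. error {se:.1e})")
```

The estimator is unbiased for any twisting parameter, since the accumulated weight is exactly the likelihood ratio of the sampled path under the true versus the twisted chain; the paper's result concerns which simulation chain drives the variance down at the best exponential rate, and in this style of example that corresponds to choosing theta so that the twisted chain makes the target event typical.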
