Article ID: iaor20164507
Volume: 41
Issue: 4
Start Page Number: 1448
End Page Number: 1468
Publication Date: Nov 2016
Journal: Mathematics of Operations Research
Authors: Alessandro Arlotto, J. Michael Steele
Keywords: programming: Markov decision, programming: dynamic, Markov processes
We prove a central limit theorem for a class of additive processes that arise naturally in the theory of finite-horizon Markov decision problems. The main theorem generalizes a classic result of Dobrushin for temporally nonhomogeneous Markov chains, and the principal innovation is that here the summands are permitted to depend on both the current state and a bounded number of future states of the chain. We show through several examples that this added flexibility gives one a direct path to the asymptotic normality of the optimal total reward of finite-horizon Markov decision problems. The same examples also explain why such results are not easily obtained by alternative Markovian techniques such as enlargement of the state space.
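To fix ideas, the following is a minimal sketch of the kind of additive functional and limit statement the abstract describes; the notation ($S_n$, the summands $f_{n,i}$, and the window length $m$) is assumed here for illustration and is not taken from the paper itself.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Illustrative sketch only: S_n, f_{n,i}, and the window length m are
% assumed notation, not the paper's own. Let X_1, X_2, \ldots be a
% temporally nonhomogeneous Markov chain and consider the additive
% functional
\[
  S_n \;=\; \sum_{i=1}^{n} f_{n,i}\bigl(X_i, X_{i+1}, \ldots, X_{i+m}\bigr),
\]
% in which each summand may depend on the current state and on a
% bounded number m of future states of the chain. The classical
% Dobrushin setting corresponds to m = 0, where each summand depends
% on the current state alone. A central limit theorem of the kind
% described asserts that, under suitable conditions,
\[
  \frac{S_n - \mathbb{E}[S_n]}{\sqrt{\operatorname{Var}(S_n)}}
  \;\xrightarrow{\;d\;}\; N(0,1)
  \qquad \text{as } n \to \infty .
\]

\end{document}
```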