Article ID: iaor1998356
Country: United Kingdom
Volume: 29
Issue: 1
Start Page Number: 114
End Page Number: 137
Publication Date: Mar 1997
Journal: Advances in Applied Probability
Authors: Sennott Linn I.
Keywords: statistics: decision, cybernetics
This paper studies the expected average cost control problem for discrete-time Markov decision processes with denumerably infinite state spaces. A sequence of finite state space truncations is defined such that the average costs and average optimal policies in the sequence converge to the optimal average cost and an optimal policy in the original process. The theory is illustrated with several examples from the control of discrete-time queueing systems. Numerical results are discussed.
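To make the truncation idea concrete, below is a minimal illustrative sketch, not taken from the paper: it runs relative value iteration on truncated state spaces {0, ..., N} of a simple discrete-time single-server queue with two service modes. All parameters (p_arr, mu, c_serv, h_cost), the blocking-at-N truncation scheme, and the stopping rule are assumptions made for illustration; the paper's construction and convergence conditions are more general and more careful.

```python
import numpy as np

# Illustrative discrete-time single-server queue (assumed parameters):
# state x = queue length; per slot, one arrival with prob. p_arr and,
# if x > 0, one service completion with prob. mu[a] under action a.
# Per-slot cost: holding cost h_cost * x plus service cost c_serv[a].
p_arr = 0.4
mu = [0.3, 0.7]      # service completion probability for each action
c_serv = [0.0, 2.0]  # per-slot cost of using each service mode
h_cost = 1.0         # holding cost per customer per slot

def transition(x, a, N):
    """Transition probabilities {next_state: prob} under action a in the
    N-truncated chain. Arrivals that would push the queue above N are
    blocked (one possible truncation scheme, chosen only for illustration)."""
    probs = {}
    dep = mu[a] if x > 0 else 0.0
    for arr, p_a in ((1, p_arr), (0, 1.0 - p_arr)):
        for d, p_d in ((1, dep), (0, 1.0 - dep)):
            if p_a * p_d == 0.0:
                continue
            y = min(max(x + arr - d, 0), N)
            probs[y] = probs.get(y, 0.0) + p_a * p_d
    return probs

def rvi_average_cost(N, iters=5000, tol=1e-8):
    """Relative value iteration on the truncated state space {0, ..., N}.
    Returns the (approximate) optimal average cost of the truncated MDP."""
    h = np.zeros(N + 1)
    g = 0.0
    for _ in range(iters):
        new = np.empty(N + 1)
        for x in range(N + 1):
            q = []
            for a in (0, 1):
                cost = h_cost * x + c_serv[a]
                q.append(cost + sum(p * h[y] for y, p in transition(x, a, N).items()))
            new[x] = min(q)
        g_new = new[0]          # normalize relative values at state 0
        new -= g_new
        if abs(g_new - g) < tol and np.max(np.abs(new - h)) < tol:
            return g_new
        h, g = new, g_new
    return g

# As the truncation level N grows, the average costs of the truncated
# problems settle down, illustrating the kind of convergence the paper
# establishes for the infinite-state problem.
for N in (5, 10, 20, 40, 80):
    print(N, rvi_average_cost(N))
```

Printing the approximate average cost for increasing N gives a quick numerical check that the truncated values stabilize; the paper supplies the conditions under which this convergence to the optimal average cost of the original denumerable-state process is guaranteed.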