Article ID: iaor200930033
Country: United States
Volume: 31
Issue: 3
Start Page Number: 597
End Page Number: 620
Publication Date: Aug 2006
Journal: Mathematics of Operations Research
Authors: de Farias, Daniela Pucci; Van Roy, Benjamin
Keywords: programming: linear; Markov processes
We introduce a new algorithm based on linear programming for optimization of average-cost Markov decision processes (MDPs). The algorithm approximates the differential cost function of a perturbed MDP via a linear combination of basis functions. We establish a bound on the performance of the resulting policy that scales gracefully with the number of states without imposing the strong Lyapunov condition required by its counterpart in de Farias and Van Roy (2003). We investigate implications of this result in the context of a queueing control problem.
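To make the abstract's idea concrete, the following is a minimal illustrative sketch in Python, not the authors' cost-shaping LP: a generic approximate linear program for an average-cost MDP in which the differential cost function is restricted to the span of a few basis functions, and the transition kernels are perturbed by mixing in a small reset probability. All data (the kernels `P`, costs `g`, basis matrix `Phi`, and perturbation parameter `eta`) are hypothetical placeholders; the paper's actual formulation and perturbation may differ.

```python
# Sketch of an approximate LP for an average-cost MDP (illustrative, not the
# paper's exact cost-shaping formulation).  The differential cost function is
# approximated as h(x) ~ Phi(x) @ r, and we maximize the average-cost lower
# bound lam subject to the Bellman-inequality constraints of the exact
# average-cost LP, restricted to the span of the basis functions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_states, n_actions, n_basis = 6, 2, 3
eta = 0.05  # assumed perturbation: mix each kernel with a reset to state 0

# Hypothetical row-stochastic transition kernels P[a] and one-stage costs g.
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
reset = np.zeros(n_states); reset[0] = 1.0
P = (1.0 - eta) * P + eta * reset          # perturbed MDP (illustrative form)
g = rng.random((n_states, n_actions))

# Basis matrix Phi (n_states x n_basis): constant, linear, quadratic features.
x = np.arange(n_states)
Phi = np.column_stack([np.ones(n_states), x / n_states, (x / n_states) ** 2])

# Decision variables z = (lam, r).  Maximize lam subject to, for every (x, a):
#   lam + Phi(x) r - sum_y P_a(x, y) Phi(y) r <= g(x, a)
rows, rhs = [], []
for a in range(n_actions):
    diff = Phi - P[a] @ Phi                # one row of (I - P_a) Phi per state
    for s in range(n_states):
        rows.append(np.concatenate(([1.0], diff[s])))
        rhs.append(g[s, a])
c = np.zeros(1 + n_basis); c[0] = -1.0     # minimize -lam  <=>  maximize lam
res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
              bounds=[(None, None)] * (1 + n_basis), method="highs")
lam, r = res.x[0], res.x[1:]
print("approximate average cost (lower bound):", lam)

# Greedy policy with respect to the fitted differential cost h = Phi @ r.
h = Phi @ r
policy = np.argmin(g + np.einsum("axy,y->xa", P, h), axis=1)
print("greedy policy:", policy)
```

Because any feasible pair (lam, Phi @ r) is also feasible in the exact average-cost LP, the optimal lam here lower-bounds the true optimal average cost; the paper's contribution is a performance bound for the greedy policy derived from such a fitted differential cost without requiring a strong Lyapunov condition.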