Article ID: iaor20113642
Volume: 36
Issue: 1
Start Page Number: 105
End Page Number: 132
Publication Date: Feb 2011
Journal: Mathematics of Operations Research
Authors: Guo Xianping, Piunovskiy Alexei
Keywords: Markov processes
This paper deals with denumerable continuous‐time Markov decision processes (MDPs) with constraints. The optimality criterion to be minimized is expected discounted loss, while several constraints of the same type are imposed. The transition rates may be unbounded, the loss rates may be unbounded as well (both from above and from below), and the policies may be history‐dependent and randomized. Based on Kolmogorov's forward equation and Dynkin's formula, we recall the Bellman equation, introduce and study occupation measures, reformulate the optimization problem as a (primal) linear program, characterize the form of optimal policies for the constrained optimization problem, and establish the duality between the convex analytic approach and dynamic programming. Finally, a series of examples is given to illustrate all of our results.
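To make the convex analytic approach concrete, below is a minimal sketch for the finite-state, finite-action case; the paper itself treats denumerable state spaces with unbounded transition and loss rates, which this toy example does not capture. All model data (rates, losses, the budget d1) are hypothetical. The LP variables are the discounted occupation measure eta(x, a), constrained by the standard linear characterization alpha*eta(y) - sum_{x,a} q(y|x,a)*eta(x,a) = gamma(y) for every state y, with the primary loss minimized subject to a bound on a second discounted loss.

```python
import numpy as np
from scipy.optimize import linprog

# Toy constrained continuous-time MDP: 2 states, 2 actions (all data hypothetical).
alpha = 0.1                       # discount rate
gamma = np.array([1.0, 0.0])      # initial distribution: start in state 0
nS, nA = 2, 2

# Conservative transition rates q[y, x, a]: columns sum to zero over y for each (x, a).
q = np.zeros((nS, nS, nA))
q[1, 0, 0], q[0, 0, 0] = 1.0, -1.0    # state 0, action 0: fail at rate 1.0
q[1, 0, 1], q[0, 0, 1] = 0.3, -0.3    # state 0, action 1: maintained, fail at rate 0.3
q[0, 1, 0], q[1, 1, 0] = 0.5, -0.5    # state 1, action 0: slow repair
q[0, 1, 1], q[1, 1, 1] = 2.0, -2.0    # state 1, action 1: fast repair

c0 = np.array([[0.0, 0.5],            # loss rate to be minimized
               [2.0, 3.5]])
c1 = np.array([[0.0, 1.0],            # constrained "effort" rate
               [0.0, 1.0]])
d1 = 3.0                              # bound on expected discounted effort

# Equality constraints: for each state y,
#   alpha * eta(y) - sum_{x,a} q(y|x,a) * eta(x,a) = gamma(y).
A_eq = np.zeros((nS, nS * nA))
for y in range(nS):
    for x in range(nS):
        for a in range(nA):
            A_eq[y, x * nA + a] = alpha * (x == y) - q[y, x, a]

res = linprog(c=c0.ravel(),
              A_ub=c1.ravel()[None, :], b_ub=[d1],
              A_eq=A_eq, b_eq=gamma,
              bounds=[(0, None)] * (nS * nA))

eta = res.x.reshape(nS, nA)
print("optimal constrained discounted loss:", res.fun)

# An optimal randomized stationary policy is read off by normalizing eta over actions.
policy = eta / eta.sum(axis=1, keepdims=True)
print("randomized stationary policy:\n", policy)
```

Note that summing the equality constraints recovers the total mass sum eta = 1/alpha, and that when the effort constraint binds, the optimal eta typically randomizes between actions in some state, which is exactly the form of constrained-optimal policy the abstract refers to.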