Discounted Continuous‐Time Markov Decision Processes with Constraints: Unbounded Transition and Loss Rates

Article ID: iaor20113642
Volume: 36
Issue: 1
Start Page Number: 105
End Page Number: 132
Publication Date: Feb 2011
Journal: Mathematics of Operations Research
Authors: ,
Keywords: Markov processes
Abstract:

This paper deals with denumerable continuous‐time Markov decision processes (MDPs) with constraints. The optimality criterion to be minimized is the expected discounted loss, while several constraints of the same type are imposed. The transition rates may be unbounded, the loss rates are allowed to be unbounded as well (from above and from below), and the policies may be history‐dependent and randomized. Based on Kolmogorov's forward equation and Dynkin's formula, we remind the reader of the Bellman equation, introduce and study occupation measures, reformulate the optimization problem as a (primal) linear program, provide the form of optimal policies for the constrained optimization problem considered here, and establish the duality between the convex analytic approach and dynamic programming. Finally, a series of examples is given to illustrate all of our results.
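For orientation, the constrained discounted criterion described in the abstract typically takes the following form. This is a sketch using standard CTMDP notation; the symbols (discount rate α, loss rates c_j, bounds d_j, transition rates q) are illustrative assumptions, not necessarily the paper's own notation.

```latex
% Standard constrained discounted CTMDP formulation (illustrative notation):
% minimize the expected discounted loss under policy \pi from initial state x,
\[
\text{minimize}\quad V_0(\pi,x) \;=\; \mathbb{E}^{\pi}_{x}\!\int_{0}^{\infty} e^{-\alpha t}\, c_0(x_t, a_t)\, dt
\]
% subject to several constraints of the same (discounted) type,
\[
\text{subject to}\quad V_j(\pi,x) \;=\; \mathbb{E}^{\pi}_{x}\!\int_{0}^{\infty} e^{-\alpha t}\, c_j(x_t, a_t)\, dt \;\le\; d_j,
\qquad j = 1,\dots,N,
\]
% where \alpha > 0 is the discount rate and the c_j may be unbounded from
% above and below. The unconstrained problem satisfies the Bellman equation
\[
\alpha\, v(x) \;=\; \inf_{a}\Big\{\, c_0(x,a) \;+\; \sum_{y} q(y \mid x, a)\, v(y) \Big\},
\]
% with q(y | x, a) the (possibly unbounded) transition rates.
```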
