Splitting Randomized Stationary Policies in Total‐Reward Markov Decision Processes

Article ID: iaor2012794
Volume: 37
Issue: 1
Start Page Number: 129
End Page Number: 153
Publication Date: Feb 2012
Journal: Mathematics of Operations Research
Authors:
Abstract:

This paper studies a discrete‐time total‐reward Markov decision process (MDP) with a given initial state distribution. A (randomized) stationary policy can be split on a given set of states if the occupancy measure of this policy can be expressed as a convex combination of the occupancy measures of stationary policies, each selecting deterministic actions on the given set and coinciding with the original stationary policy outside of this set. For a stationary policy, necessary and sufficient conditions are provided for splitting it at a single state, as well as sufficient conditions for splitting it on the whole state space. These results are applied to constrained MDPs. The results are refined for absorbing (including discounted) MDPs with finite state and action spaces. In particular, this paper provides an efficient algorithm that represents the occupancy measure of a given policy as a convex combination of the occupancy measures of finitely many (stationary) deterministic policies. This algorithm generates the splitting policies in such a way that each pair of consecutive policies differs at exactly one state. The results are applied to constrained problems to efficiently compute an optimal policy by computing and splitting a stationary optimal policy.
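The single-state splitting property described in the abstract can be illustrated numerically. The sketch below assumes a small, arbitrarily chosen discounted MDP (discounting being a special case of the absorbing total-reward setting), builds a stationary policy that randomizes between two actions at one state, computes its occupancy measure and those of the two deterministic policies it mixes, and checks that the former is a convex combination of the latter. The transition data, parameters (gamma, mu, p, split_state), and function names are illustrative assumptions; this is not the paper's algorithm, only a demonstration of the splitting concept it studies.

```python
import numpy as np

# Hypothetical 3-state, 2-action discounted MDP (illustrative data only).
gamma = 0.9
n_states, n_actions = 3, 2
rng = np.random.default_rng(0)

# P[a, s, s'] : transition probabilities for each action.
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)

mu = np.array([1.0, 0.0, 0.0])   # initial state distribution
split_state = 0                  # state where the policy randomizes
p = 0.3                          # probability of action 0 at split_state

def occupancy(policy):
    """State-action occupancy measure Q(z,a) of a stationary policy.

    policy[s, a] = probability of action a in state s.
    Q(z, a) = sum_n gamma^n P_mu(x_n = z, a_n = a), obtained from the
    linear system q = mu + gamma * P_pi^T q for the state occupancies q(z).
    """
    P_pi = np.einsum('sa,ast->st', policy, P)   # transition matrix under the policy
    q_state = np.linalg.solve(np.eye(n_states) - gamma * P_pi.T, mu)
    return q_state[:, None] * policy            # Q(z, a) = q(z) * policy(a | z)

# Arbitrary deterministic choices outside the split state.
base = np.zeros((n_states, n_actions))
base[1, 1] = 1.0
base[2, 0] = 1.0

# pi randomizes at split_state; phi1 and phi2 take its two deterministic actions there.
pi, phi1, phi2 = base.copy(), base.copy(), base.copy()
pi[split_state] = [p, 1.0 - p]
phi1[split_state] = [1.0, 0.0]
phi2[split_state] = [0.0, 1.0]

Q_pi, Q1, Q2 = occupancy(pi), occupancy(phi1), occupancy(phi2)

# Solve for the mixing coefficient alpha in Q_pi = alpha*Q1 + (1-alpha)*Q2
# and verify that the combination is exact and convex.
d = (Q1 - Q2).ravel()
alpha = float(d @ (Q_pi - Q2).ravel() / (d @ d))
residual = np.abs(Q_pi - (alpha * Q1 + (1 - alpha) * Q2)).max()

print(f"alpha = {alpha:.4f}, convex: {0.0 <= alpha <= 1.0}, max residual = {residual:.2e}")
```

Note that the mixing coefficient alpha generally differs from the randomization probability p, since it also depends on how often the split state is visited under each deterministic policy.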
