Constrained Markov decision models with weighted discounted rewards

Article ID: iaor20041741
Country: United States
Volume: 20
Issue: 2
Start Page Number: 302
End Page Number: 320
Publication Date: May 1995
Journal: Mathematics of Operations Research
Authors: Feinberg E.A., Shwartz A.
Keywords: programming: dynamic, programming: linear
Abstract:

This paper deals with constrained optimization of Markov decision processes. Both the objective function and the constraints are sums of standard discounted rewards, but each with a different discount factor. Such models arise, e.g., in production and in applications involving multiple time scales. We prove that if a feasible policy exists, then there exists an optimal policy which is (i) stationary (nonrandomized) from some step onward and (ii) randomized Markov before this step, with the total number of actions added by randomization bounded by the number of constraints. Optimality of such policies for multi-criteria problems is also established. These new policies have the pleasing aesthetic property that the amount of randomization they require over any trajectory is restricted by the number of constraints. This result is new even for constrained optimization with a single discount factor, where the optimality of randomized stationary policies is known; however, a randomized stationary policy may require an infinite number of randomizations over time. We also formulate a linear programming algorithm for approximate solutions of constrained weighted discounted models.
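For concreteness, the weighted discounted criterion described in the abstract can be sketched as follows; the notation (weights λ, discount factors β, reward functions r, constraint levels C) is assumed for illustration and is not taken from the paper.

```latex
% A sketch of a constrained weighted discounted criterion (assumed notation):
% each objective is a weighted sum of standard discounted rewards, with each
% summand carrying its own discount factor.
\[
  W^{k}(\pi) \;=\; \sum_{m=1}^{M} \lambda_{k,m}\,
    \mathbb{E}^{\pi}\!\left[\sum_{t=0}^{\infty} \beta_{m}^{\,t}\, r_{k,m}(x_t,a_t)\right],
  \qquad 0 < \beta_{m} < 1,
\]
\[
  \text{maximize } W^{0}(\pi)
  \quad\text{subject to}\quad W^{k}(\pi) \ge C_k,\qquad k = 1,\dots,K.
\]
```

The abstract also refers to a linear programming algorithm for approximate solutions of the weighted model; that algorithm is not reproduced here. The sketch below instead shows the classical occupation-measure LP for the single-discount-factor constrained case that the abstract cites as already understood, with made-up problem data and SciPy's linprog. It recovers a randomized stationary policy, which, as the abstract notes, may randomize at every step, in contrast with the paper's policies that randomize at most as many times as there are constraints.

```python
# Minimal sketch: occupation-measure LP for a constrained discounted MDP with a
# single discount factor (the baseline case mentioned in the abstract).
# All problem data below (P, r, c, beta, alpha, d) are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

n_states, n_actions = 3, 2
beta = 0.9                                   # single discount factor (assumed)
rng = np.random.default_rng(0)

P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[x, a, y]
r = rng.uniform(size=(n_states, n_actions))  # reward to maximize
c = rng.uniform(size=(n_states, n_actions))  # constraint reward
alpha = np.full(n_states, 1.0 / n_states)    # initial state distribution
d = 3.0                                      # required discounted constraint value (assumed)

n_vars = n_states * n_actions                # variables: rho(x, a) >= 0, flattened row-major

# Balance equations: sum_a rho(y, a) - beta * sum_{x, a} P(x, a, y) * rho(x, a) = alpha(y)
A_eq = np.zeros((n_states, n_vars))
for y in range(n_states):
    for x in range(n_states):
        for a in range(n_actions):
            A_eq[y, x * n_actions + a] = (1.0 if x == y else 0.0) - beta * P[x, a, y]

# Constraint sum_{x, a} c(x, a) * rho(x, a) >= d, written as -c . rho <= -d for linprog.
A_ub = -c.reshape(1, n_vars)
b_ub = np.array([-d])

# Maximize r . rho by minimizing its negation.
res = linprog(-r.reshape(n_vars), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=alpha, bounds=[(0, None)] * n_vars)

if res.success:
    rho = res.x.reshape(n_states, n_actions)
    policy = rho / rho.sum(axis=1, keepdims=True)   # randomized stationary policy
    print("optimal discounted reward:", r.reshape(n_vars) @ res.x)
    print("randomized stationary policy:\n", policy)
else:
    print("no feasible policy for the assumed constraint level d")
```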
