Markov decision processes with slow scale periodic decisions

Article ID: iaor20072030
Country: United States
Volume: 28
Issue: 4
Start Page Number: 777
End Page Number: 800
Publication Date: Nov 2003
Journal: Mathematics of Operations Research
Authors:
Keywords: programming: dynamic
Abstract:

We consider a class of discrete-time dynamic decision-making models that we refer to as Periodically Time-Inhomogeneous Markov Decision Processes (PTMDPs). In these models, the decision-making horizon can be partitioned into intervals of N+1 epochs, called slow scale cycles. The transition law and reward function are time-homogeneous over the first N epochs of each slow scale cycle, but distinct at the final epoch. Such models are motivated by applications in which decisions of a different nature are taken at different time scales, i.e., many ‘low-level’ decisions are made between less frequent ‘high-level’ ones. For the PTMDP model, we consider the problem of optimizing the expected discounted reward when rewards devalue by a discount factor λ at the beginning of each slow scale cycle. When N is large, initially stationary policies (i.s.p.s) are natural candidates for optimal policies. Similar to turnpike policies, an initially stationary policy uses the same decision rule for a large number of epochs in each slow scale cycle, followed by a relatively short planning horizon of time-varying decision rules. In this paper, we characterize the form of the optimal value as a function of N, establish conditions ensuring the existence of near-optimal i.s.p.s, and characterize their structure. Our analysis treats separately the cases where the time-homogeneous part of the system has a state-dependent or a state-independent optimal average reward; as we illustrate, the results in these two cases are qualitatively different.
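
The one-cycle structure described in the abstract lends itself to a compact fixed-point computation. The following is a minimal Python sketch, not taken from the paper, of value iteration for a finite-state PTMDP under the stated discounting scheme: rewards within a cycle are undiscounted, the final epoch uses its own transition law and reward, and the value of the next cycle enters through the factor λ. All names (P_fast, r_fast, P_slow, r_slow) and the randomly generated instance are hypothetical; the paper works at a greater level of generality.

    import numpy as np

    # Hypothetical finite PTMDP instance: S states, A actions, N fast-scale
    # epochs per slow scale cycle, cycle discount factor lam. P_fast/r_fast
    # govern the first N epochs; P_slow/r_slow the distinct final epoch.
    rng = np.random.default_rng(0)
    S, A, N, lam = 4, 3, 10, 0.9

    def random_kernel(S, A, rng):
        P = rng.random((A, S, S))
        return P / P.sum(axis=2, keepdims=True)  # each row is a distribution

    P_fast, r_fast = random_kernel(S, A, rng), rng.random((A, S))
    P_slow, r_slow = random_kernel(S, A, rng), rng.random((A, S))

    def bellman(P, r, W):
        """One backup: max over a of r(s,a) + sum_s' P(s'|s,a) W(s')."""
        return (r + P @ W).max(axis=0)

    V = np.zeros(S)  # value function at the start of a slow scale cycle
    for _ in range(500):
        # Final epoch of the cycle; the next cycle's value is discounted by lam.
        U = bellman(P_slow, r_slow, lam * V)
        # Then N identical fast-scale backups, undiscounted within the cycle.
        for _ in range(N):
            U = bellman(P_fast, r_fast, U)
        delta = np.max(np.abs(U - V))
        V = U
        if delta < 1e-10:
            break

    print("cycle-start optimal values:", V)
    # For large N, the greedy fast-scale rule at the cycle start is the kind of
    # single decision rule an initially stationary policy would reuse deep
    # inside the cycle (a heuristic reading, not the paper's construction).
    print("greedy fast-scale rule:", (r_fast + P_fast @ V).argmax(axis=0))

Because each Bellman backup is a sup-norm nonexpansion and the factor λ < 1 multiplies the continuation value exactly once per cycle, the composite one-cycle operator is a λ-contraction, so the iteration above converges to the cycle-start value function.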
