Time-sharing policies for controlled Markov chains

Article ID: iaor1995288
Country: United States
Volume: 41
Issue: 6
Start Page Number: 1116
End Page Number: 1124
Publication Date: Nov 1993
Journal: Operations Research
Authors: ,
Keywords: decision theory: multiple criteria, programming: dynamic
Abstract:

The authors propose a class of nonstationary policies called policy time sharing (PTS), which possesses several desirable properties for problems where the criteria are of the average-cost type: an optimal policy exists within this class, the computation of optimal policies is straightforward, and the resulting policy is easy to implement. While in the finite-state case stationary policies are also known to share these properties, the new policies are much more flexible, in the sense that they can be applied to solve adaptive problems, and they suggest new ways to incorporate the particular structure of the problem at hand into the derivation of optimal policies. In addition, they provide insight into the pathwise structure of controlled Markov chains. To use a PTS policy, one alternates among several stationary deterministic policies, switching whenever some predetermined state is reached. In some (countable-state) cases, optimal solutions of the PTS type are available and easy to compute, whereas optimal stationary policies are not. Examples that illustrate the last point and the usefulness of the new approach are discussed, involving constrained optimization problems with a countable state space or a compact action space.
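For intuition, the switching mechanism described above can be sketched in a few lines of code. The following Python fragment is a minimal illustration of a time-sharing scheme of this kind: it alternates between two stationary deterministic policies on a toy three-state chain, switching only on returns to a designated state. The transition matrices, costs, switching state, and cycle pattern are illustrative assumptions, not data or results from the paper.

```python
import numpy as np

# Hypothetical toy example: a 3-state controlled Markov chain with two actions.
# P[a] is the transition matrix under action a; cost[a] gives the per-stage
# cost of action a in each state.  All numbers are illustrative.
P = [
    np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]]),   # action 0
    np.array([[0.1, 0.6, 0.3],
              [0.1, 0.3, 0.6],
              [0.4, 0.4, 0.2]]),   # action 1
]
cost = [np.array([1.0, 2.0, 4.0]),   # per-stage cost under action 0
        np.array([3.0, 1.0, 1.5])]   # per-stage cost under action 1

# Two stationary deterministic policies, given as state -> action maps.
policy_a = [0, 0, 0]
policy_b = [1, 1, 1]

SWITCH_STATE = 0           # switching is decided only on visits to this state
CYCLE_PATTERN = [0, 0, 1]  # use policy_a for two cycles, policy_b for one, repeat

def run_pts(horizon=200_000, seed=0):
    """Simulate the time-sharing scheme and return the empirical average cost."""
    rng = np.random.default_rng(seed)
    policies = [policy_a, policy_b]
    state = SWITCH_STATE
    cycle = 0                  # number of completed returns to SWITCH_STATE
    total = 0.0
    for _ in range(horizon):
        # The active policy stays fixed between returns to the switching state.
        active = policies[CYCLE_PATTERN[cycle % len(CYCLE_PATTERN)]]
        a = active[state]
        total += cost[a][state]
        state = rng.choice(3, p=P[a][state])
        if state == SWITCH_STATE:
            cycle += 1         # a cycle ends; the next policy in the pattern takes over
    return total / horizon

if __name__ == "__main__":
    print(f"empirical average cost: {run_pts():.3f}")
```

In this toy setting, adjusting the cycle pattern trades off the average costs of the two underlying stationary policies, which is the kind of flexibility the abstract attributes to PTS policies in constrained problems.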
