Article ID: iaor1990284
Country: United States
Volume: 37
Issue: 5
Start Page Number: 780
End Page Number: 790
Publication Date: Sep 1989
Journal: Operations Research
Authors: Ross Keith W., Varadarajan R.
Keywords: markov processes
The authors consider time-average Markov Decision Processes (MDPs), which accumulate a reward and cost at each decision epoch. A policy meets the sample-path constraint if the time-average cost is below a specified value with probability one. The optimization problem is to maximize the expected average reward over all policies that meet the sample-path constraint. The sample-path constraint is compared with the more commonly studied constraint of requiring the average expected cost to be below a specified value. Although the two criteria are equivalent for certain classes of MDPs, their feasible and optimal policies differ for many nontrivial problems. In general, there do not exist optimal or nearly optimal stationary policies when the expected average-cost constraint is employed. Assuming that a policy exists that meets the sample-path constraint, the authors establish that nearly optimal stationary policies exist for communicating MDPs. A parametric linear programming algorithm is given for constructing nearly optimal stationary policies. The discussion relies on well-known results from the theory of stochastic processes and linear programming. The techniques lead to simple proofs of the existence of optimal and nearly optimal stationary policies for unichain and deterministic MDPs, respectively.
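The paper's parametric linear programming algorithm is not reproduced in this record, but the LP machinery it builds on is standard for constrained average-reward MDPs. As a minimal sketch only, the following Python code solves the closely related expected-average-cost constrained problem via the classical occupation-measure LP and reads off a randomized stationary policy; it is not the authors' sample-path algorithm. The arrays P, r, c, the bound alpha, and the function name constrained_mdp_lp are hypothetical, and the policy-recovery step pi(a|s) = x(s,a) / sum_a' x(s,a') is only meaningful on states the optimal occupation measure visits (e.g., under a unichain assumption).

import numpy as np
from scipy.optimize import linprog

def constrained_mdp_lp(P, r, c, alpha):
    # Sketch of the standard occupation-measure LP for an
    # expected-average-cost constrained MDP (not the paper's
    # parametric sample-path algorithm).
    #   P[s, a, s'] : transition probabilities (hypothetical input)
    #   r[s, a]     : rewards,  c[s, a] : costs
    #   alpha       : bound on the expected average cost
    S, A, _ = P.shape
    n = S * A  # one variable x[s, a] per state-action pair

    # Objective: maximize sum_{s,a} x[s,a] r[s,a]; linprog minimizes.
    obj = -r.reshape(n)

    # Balance constraints: for each state s',
    #   sum_a x[s',a] - sum_{s,a} x[s,a] P[s,a,s'] = 0,
    # plus the normalization sum_{s,a} x[s,a] = 1.
    A_eq = np.zeros((S + 1, n))
    for sp in range(S):
        for s in range(S):
            for a in range(A):
                A_eq[sp, s * A + a] = float(s == sp) - P[s, a, sp]
    A_eq[S, :] = 1.0
    b_eq = np.zeros(S + 1)
    b_eq[S] = 1.0

    # Expected average-cost constraint: sum_{s,a} x[s,a] c[s,a] <= alpha.
    A_ub = c.reshape(1, n)
    b_ub = np.array([alpha])

    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    if not res.success:
        raise ValueError("LP infeasible: no policy meets the cost bound")
    x = res.x.reshape(S, A)

    # Randomized stationary policy pi(a|s) = x[s,a] / sum_a' x[s,a'];
    # unvisited states (zero mass) default to a uniform choice.
    totals = x.sum(axis=1, keepdims=True)
    pi = np.divide(x, totals, out=np.full_like(x, 1.0 / A),
                   where=totals > 0)
    return pi, -res.fun

A call such as constrained_mdp_lp(P, r, c, alpha) returns the policy matrix and the optimal expected average reward. The paper's contribution goes beyond this: for the sample-path constraint, a single LP of this form does not suffice, which is why the authors develop a parametric family of such programs.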