Weighted Markov decision processes with perturbation

Article ID: iaor2003705
Country: Germany
Volume: 53
Issue: 3
Start Page Number: 465
End Page Number: 480
Publication Date: Jan 2001
Journal: Mathematical Methods of Operations Research (Heidelberg)
Authors:
Keywords: programming: dynamic
Abstract:

In this paper we consider the weighted reward Markov decision process with perturbation. The ‘weighted reward’ refers to an appropriately normalized convex combination of the discounted and the long-run average reward criteria. This criterion allows the controller to trade off short-term costs against long-term costs. In every application where both the discounted and the long-run average criteria have been proposed in the past, there is a clear rationale for considering the weighted criterion. Of course, as with all Markov decision models, the standard weighted-criterion model assumes that all the transition probabilities are known precisely. Since in most applications this is not the case, we consider the perturbed version of the weighted reward model. We prove that in most cases a nearly optimal control can be found in the class of relatively simple ‘ultimately deterministic’ controls: controls that behave exactly like deterministic stationary controls after a certain point in time.
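
For concreteness, a common way to formalize such a weighted criterion is sketched below in LaTeX. The notation ($w_{\lambda,\beta}$, $v_{\beta}$, $g$, and the scaling factor $(1-\beta)$) is assumed here for illustration; the abstract itself does not specify the normalization the authors use.

% Sketch of a normalized weighted reward criterion; the (1 - \beta)
% factor is an assumed normalization that puts the discounted value
% on the same per-stage scale as the long-run average reward.
\[
  w_{\lambda,\beta}(\pi) \;=\; \lambda\,(1-\beta)\,v_{\beta}(\pi) \;+\; (1-\lambda)\,g(\pi),
  \qquad \lambda \in [0,1],\ \beta \in (0,1),
\]
where
\[
  v_{\beta}(\pi) = \mathbb{E}^{\pi}\!\left[\sum_{t=0}^{\infty} \beta^{t}\, r(X_t, A_t)\right],
  \qquad
  g(\pi) = \liminf_{T \to \infty} \frac{1}{T}\,
           \mathbb{E}^{\pi}\!\left[\sum_{t=0}^{T-1} r(X_t, A_t)\right].
\]
In this notation, a control $\pi = (\pi_0, \pi_1, \dots)$ is ultimately deterministic if there exist a time $N$ and a deterministic stationary policy $f$ such that $\pi_t = f$ for all $t \ge N$.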
