Reinforcement Learning in Robust Markov Decision Processes

Article ID: iaor20164502
Volume: 41
Issue: 4
Start Page Number: 1325
End Page Number: 1353
Publication Date: Nov 2016
Journal: Mathematics of Operations Research
Authors: Shiau Hong Lim, Huan Xu, Shie Mannor
Keywords: learning, programming: Markov decision, behaviour, optimization
Abstract:

An important challenge in Markov decision processes (MDPs) is to ensure robustness with respect to unexpected or adversarial system behavior. A standard paradigm for tackling this challenge is the robust MDP framework, which models the parameters as arbitrary elements of pre-defined 'uncertainty sets' and seeks the minimax policy: the policy that performs best under the worst realization of the parameters in the uncertainty set. A crucial issue in the robust MDP framework, largely unaddressed in the literature, is how to find an appropriate description of the uncertainty in a principled, data-driven way. In this paper we address this problem using an online learning approach: we devise an algorithm that, without knowing the true uncertainty model, is able to adapt its level of protection to the uncertainty and, in the long run, performs as well as the minimax policy computed with knowledge of the true uncertainty model. Indeed, the algorithm achieves regret bounds similar to those of standard MDPs, where no parameter is adversarial, which shows that robust learning can handle uncertainty in MDPs at virtually no extra cost. To the best of our knowledge, this is the first attempt to learn uncertainty in robust MDPs.
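To make the minimax objective concrete, the following is a minimal, self-contained Python sketch of robust value iteration with a rectangular L1 uncertainty set around a nominal transition model. This is an illustrative textbook construction, not the paper's learning algorithm; the radius eps, the helper names, and the toy dimensions are all assumptions made for the example.

```python
import numpy as np

def worst_case_value(p_hat, v, eps):
    """Inner minimization of the robust Bellman backup:
    min_p p.v  s.t.  ||p - p_hat||_1 <= eps  and  p is a distribution.
    Moving mass delta between two states costs 2*delta in L1, so at most
    eps/2 of mass can be shifted toward the lowest-value state."""
    p = p_hat.astype(float).copy()
    i_min = int(np.argmin(v))
    shift = min(eps / 2.0, 1.0 - p[i_min])
    p[i_min] += shift
    # Pay for the added mass by removing it from the highest-value states.
    removed = 0.0
    for i in np.argsort(v)[::-1]:
        if i == i_min:
            continue
        take = min(p[i], shift - removed)
        p[i] -= take
        removed += take
        if removed >= shift:
            break
    return float(p @ v)

def robust_value_iteration(P_hat, R, eps, gamma=0.95, n_iter=500):
    """Value iteration with the robust (minimax) Bellman operator:
    V(s) = max_a [ R(s,a) + gamma * min_{p in U(s,a)} p.V ]."""
    S, A, _ = P_hat.shape
    V = np.zeros(S)
    for _ in range(n_iter):
        Q = np.array([[R[s, a] + gamma * worst_case_value(P_hat[s, a], V, eps)
                       for a in range(A)] for s in range(S)])
        V = Q.max(axis=1)
    return V, Q.argmax(axis=1)

# Toy example: 3 states, 2 actions, random nominal transitions and rewards.
rng = np.random.default_rng(0)
P_hat = rng.dirichlet(np.ones(3), size=(3, 2))   # shape (S, A, S)
R = rng.uniform(size=(3, 2))                     # shape (S, A)
V, policy = robust_value_iteration(P_hat, R, eps=0.2)
print("robust values:", V, "minimax policy:", policy)
```

The rectangular L1 set is one common choice in the robust MDP literature; in a sketch like this the level of protection (eps) must be fixed a priori, whereas the paper's contribution is to adapt the level of protection from data in an online fashion.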
