Exploiting the Structural Properties of the Underlying Markov Decision Problem in the Q-Learning Algorithm

Article ID: iaor200952615
Country: United States
Volume: 20
Issue: 2
Start Page Number: 288
End Page Number: 301
Publication Date: Mar 2008
Journal: INFORMS Journal On Computing
Authors:
Keywords: learning
Abstract:

This paper shows how to exploit the structural properties of the underlying Markov decision problem to improve the convergence behavior of the Q-learning algorithm. In particular, we consider infinite-horizon discounted-cost Markov decision problems where there is a natural ordering between the states of the system and the value function is known to be monotone in the state. We propose a new variant of the Q-learning algorithm that ensures that the value function approximations obtained during the intermediate iterations are also monotone in the state. We establish the convergence of the proposed algorithm and experimentally show that it significantly improves the convergence behavior of the standard version of the Q-learning algorithm.
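The idea described in the abstract can be sketched as follows: after each standard Q-learning update, project the Q-table so that the implied value function V(s) = max_a Q(s, a) stays monotone nondecreasing in the state index. This is a minimal illustrative sketch; the specific projection operator, the toy MDP, and all function names here are assumptions for illustration, not the authors' exact construction.

```python
import numpy as np

def monotone_projection(Q):
    """Illustrative projection (an assumption, not the paper's operator):
    force V(s) = max_a Q(s, a) to be nondecreasing in s by raising each
    state's best Q-value to at least the previous state's value."""
    V = Q.max(axis=1)
    for s in range(1, len(Q)):
        if V[s] < V[s - 1]:
            Q[s, Q[s].argmax()] = V[s - 1]
            V[s] = V[s - 1]
    return Q

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Standard Q-learning update followed by the monotonicity projection,
    so every intermediate approximation is monotone in the state."""
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
    return monotone_projection(Q)

# Toy MDP (hypothetical): states 0..4 are ordered, and rewards grow with
# the state index, so the true value function is monotone in the state.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
s = 0
for _ in range(2000):
    a = int(rng.integers(n_actions))
    s_next = min(n_states - 1, max(0, s + (1 if a == 1 else -1)))
    r = float(s_next)  # higher states yield higher reward
    Q = q_learning_step(Q, s, a, r, s_next)
    s = s_next

V = Q.max(axis=1)  # monotone nondecreasing by construction of the projection
```

Because the projection runs after every update, the intermediate value-function approximations never violate the known monotone structure, which is the property the abstract says the proposed variant maintains.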
