An empirical study of policy convergence in Markov decision process value iteration

Article ID: iaor20052291
Country: United Kingdom
Volume: 32
Issue: 1
Start Page Number: 127
End Page Number: 142
Publication Date: Jan 2005
Journal: Computers and Operations Research
Authors: ,
Keywords: programming: dynamic
Abstract:

The value iteration algorithm is a well-known technique for generating solutions to discounted Markov decision process (MDP) models. Although simple to implement, the approach is nevertheless limited in situations where many Markov decision processes must be solved, such as real-time state-based control problems or simulation/optimization problems, because of the potentially large number of iterations required for the value function to converge to an ϵ-optimal solution. Experimental results suggest, however, that the sequence of solution policies associated with each iteration of the algorithm converges much more rapidly than the value function does. This behavior has significant implications for designing solution approaches for MDPs, yet it has neither been explicitly characterized in the literature nor generated significant discussion. This paper seeks to generate such discussion by providing comparative empirical convergence results and exploring several predictors that allow estimation of policy convergence speed from existing MDP parameters.
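The phenomenon the abstract describes is straightforward to observe on a small example: run value iteration, record the iteration at which the greedy policy last changes, and compare it with the iteration at which the value function meets the usual ϵ-optimal stopping rule. The Python sketch below is a minimal illustration of that comparison on a randomly generated MDP; the instance size, discount factor γ = 0.95, and tolerance ϵ are illustrative assumptions and not the paper's experimental setup.

```python
import numpy as np


def value_iteration(P, R, gamma=0.95, eps=1e-6, max_iter=100_000):
    """Value iteration on a discounted MDP, tracking two convergence points:
    the last iteration at which the greedy policy changed, and the iteration
    at which the value updates satisfy the standard eps-optimal stopping rule
    max_s |V_k(s) - V_{k-1}(s)| < eps * (1 - gamma) / (2 * gamma).

    P: transition probabilities, shape (A, S, S), rows summing to 1.
    R: expected one-step rewards, shape (S, A).
    """
    S, A = R.shape
    V = np.zeros(S)
    policy = -np.ones(S, dtype=int)           # sentinel: no greedy policy yet
    last_policy_change = 0
    threshold = eps * (1.0 - gamma) / (2.0 * gamma)

    for k in range(1, max_iter + 1):
        # Q[s, a] = R[s, a] + gamma * sum_{s'} P[a, s, s'] * V[s']
        Q = R + gamma * np.einsum('aij,j->ia', P, V)
        new_V = Q.max(axis=1)
        new_policy = Q.argmax(axis=1)

        if np.any(new_policy != policy):
            last_policy_change = k

        if np.max(np.abs(new_V - V)) < threshold:
            return new_policy, new_V, last_policy_change, k

        V, policy = new_V, new_policy

    raise RuntimeError("value iteration did not converge within max_iter")


if __name__ == "__main__":
    # Illustrative random MDP with 50 states and 4 actions (not from the paper).
    rng = np.random.default_rng(0)
    S, A = 50, 4
    P = rng.random((A, S, S))
    P /= P.sum(axis=2, keepdims=True)         # normalise into stochastic matrices
    R = rng.random((S, A))

    policy, V, policy_iter, value_iter = value_iteration(P, R, gamma=0.95)
    print(f"greedy policy last changed at iteration {policy_iter}")
    print(f"value function met the eps-optimal stopping rule at iteration {value_iter}")
```

On instances like this, the greedy policy typically stabilizes well before the value function satisfies the ϵ-optimal stopping criterion, which is the gap the paper studies empirically.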
