An upper bound on the loss from approximate optimal-value functions

Article ID: iaor1999816
Country: United States
Volume: 16
Issue: 3
Start Page Number: 227
End Page Number: 233
Publication Date: Jul 1994
Journal: Machine Learning
Authors: Singh S.P., Yee R.C.
Keywords: programming: dynamic
Abstract:

Many reinforcement learning approaches can be formulated using the theory of Markov decision processes and the associated method of dynamic programming (DP). The value of this theoretical understanding, however, is tempered by many practical concerns. One important question is whether DP-based approaches that use function approximation rather than lookup tables can avoid catastrophic effects on performance. This note presents a result of Bertsekas guaranteeing that small errors in the approximation of a task's optimal value function cannot produce arbitrarily bad performance when actions are selected by a greedy policy. We derive an upper bound on performance loss that is slightly tighter than Bertsekas's, and we show how the bound extends to Q-learning. These results provide a partial theoretical rationale for the approximation of value functions, an issue of great practical importance in reinforcement learning.
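For reference, the kind of guarantee at issue can be written out explicitly. The statement below uses the form commonly cited for this result; the abstract itself does not display the constant, so take it as a sketch rather than a quotation. Here V* is the optimal value function, V its approximation, pi the policy that is greedy with respect to V, and gamma in [0,1) the discount factor. In LaTeX notation:

\[
\|V - V^{*}\|_{\infty} \le \varepsilon
\quad\Longrightarrow\quad
\|V^{*} - V^{\pi}\|_{\infty} \le \frac{2\gamma\varepsilon}{1-\gamma}.
\]

For the Q-learning version, the analogous condition \(\|Q - Q^{*}\|_{\infty} \le \varepsilon\) yields a loss of at most \(2\varepsilon/(1-\gamma)\) for the policy that is greedy with respect to Q.

A bound of this form is easy to check numerically. The sketch below (Python with NumPy; the random-MDP setup and all names such as n_states and V_tilde are illustrative assumptions, not taken from the paper) computes V* by value iteration on a small random MDP, perturbs it by at most eps, evaluates the greedy policy exactly, and compares the resulting loss to 2*gamma*eps/(1-gamma):

    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions, gamma = 20, 4, 0.9

    # Random MDP: transition kernel P[a, s, s'] (rows normalized) and rewards R[a, s].
    P = rng.random((n_actions, n_states, n_states))
    P /= P.sum(axis=2, keepdims=True)
    R = rng.random((n_actions, n_states))

    # Value iteration to near-convergence gives V*.
    V = np.zeros(n_states)
    for _ in range(2000):
        V = (R + gamma * P @ V).max(axis=0)  # Bellman optimality backup
    V_star = V

    # Perturb V* by at most eps and act greedily w.r.t. the perturbation.
    eps = 0.05
    V_tilde = V_star + rng.uniform(-eps, eps, size=n_states)
    pi = (R + gamma * P @ V_tilde).argmax(axis=0)

    # Evaluate the greedy policy exactly: V^pi = (I - gamma * P_pi)^{-1} R_pi.
    P_pi = P[pi, np.arange(n_states), :]
    R_pi = R[pi, np.arange(n_states)]
    V_pi = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)

    loss = np.max(V_star - V_pi)
    bound = 2 * gamma * eps / (1 - gamma)
    print(f"loss = {loss:.4f}  <=  bound = {bound:.4f}")

On any such MDP the printed loss stays below the bound; the bound is typically loose in practice, which is consistent with its role as a worst-case guarantee rather than a tight prediction.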
