An Approximate Dynamic Programming Algorithm for Monotone Value Functions

Article ID: iaor20164711
Volume: 63
Issue: 6
Start Page Number: 1489
End Page Number: 1511
Publication Date: Dec 2015
Journal: Operations Research
Authors: Daniel R. Jiang, Warren B. Powell
Keywords: programming: dynamic, medicine, energy
Abstract:

Many sequential decision problems can be formulated as Markov decision processes (MDPs) where the optimal value function (or cost-to-go function) can be shown to satisfy a monotone structure in some or all of its dimensions. When the state space becomes large, traditional techniques, such as the backward dynamic programming algorithm (i.e., backward induction or value iteration), may no longer be effective in finding a solution within a reasonable time frame, and thus we are forced to consider other approaches, such as approximate dynamic programming (ADP). We propose a provably convergent ADP algorithm called Monotone-ADP that exploits the monotonicity of the value functions to increase the rate of convergence. In this paper, we describe a general finite-horizon problem setting where the optimal value function is monotone, present a convergence proof for Monotone-ADP under various technical assumptions, and show numerical results for three application domains: optimal stopping, energy storage/allocation, and glycemic control for diabetes patients. The empirical results indicate that by taking advantage of monotonicity, we can attain high-quality solutions within a relatively small number of iterations, using up to two orders of magnitude less computation than is needed to compute the optimal solution exactly.
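To illustrate the core idea the abstract describes, the sketch below shows a stochastic-approximation value update followed by a monotonicity-preserving projection on a one-dimensional state grid. This is a hypothetical toy example, not the paper's algorithm or experiments: the problem setup, the variable names (`V`, `true_V`, `monotone_projection`), and the stepsize rule are all assumptions made for illustration.

```python
import numpy as np

# Toy illustration (assumption, not the paper's implementation): maintain a
# value function estimate on a 1-D grid that is assumed nondecreasing in the
# state, and restore monotonicity after every noisy update.

rng = np.random.default_rng(0)

S = 50                           # number of states on a 1-D grid
V = np.zeros(S)                  # value function estimate
true_V = np.sqrt(np.arange(S))   # hypothetical monotone "true" values


def monotone_projection(V, s):
    """Restore monotonicity around the freshly updated state s:
    lower states are pulled down to at most V[s], higher states
    are pushed up to at least V[s]."""
    V[:s] = np.minimum(V[:s], V[s])
    V[s + 1:] = np.maximum(V[s + 1:], V[s])
    return V


for n in range(1, 5001):
    s = rng.integers(S)                      # sample a state to visit
    v_hat = true_V[s] + rng.normal(0, 0.5)   # noisy observation of its value
    alpha = 1.0 / n ** 0.6                   # diminishing stepsize
    V[s] = (1 - alpha) * V[s] + alpha * v_hat    # stochastic approximation step
    V = monotone_projection(V, s)            # monotonicity-preserving projection

print(np.max(np.abs(V - true_V)))            # error of the monotone estimate
```

The intuition, consistent with the abstract's claim about convergence rate: each observation at one state also tightens the estimate at every state it dominates or is dominated by, so information propagates across the state space far faster than a pointwise update alone would allow.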
