Decomposable Markov Decision Processes: A Fluid Optimization Approach

Article ID: iaor2017655
Volume: 64
Issue: 6
Start Page Number: 1537
End Page Number: 1555
Publication Date: Dec 2016
Journal: Operations Research
Authors:
Keywords: programming: Markov decision, optimization, stochastic processes
Abstract:

Decomposable Markov decision processes (MDPs) are problems in which the stochastic system can be decomposed into multiple individual components. Although such MDPs arise naturally in many practical applications, they are often difficult to solve exactly because the state space of the complete system grows exponentially with the number of components. In this paper, we propose an approximate solution approach to decomposable MDPs that is based on re-solving a fluid linear optimization formulation of the problem at each decision epoch. This formulation tractably approximates the problem by modeling transition behavior at the level of the individual components rather than the complete system. We prove that our fluid formulation provides a tighter bound on the optimal value function than three state-of-the-art formulations: the approximate linear optimization formulation, the classical Lagrangian relaxation formulation, and a novel, alternate Lagrangian relaxation that is based on relaxing an action consistency constraint. We provide a numerical demonstration of the effectiveness of the approach on multiarmed bandit problems, where we show that our approach provides near-optimal performance and outperforms state-of-the-art algorithms.
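To make the fluid idea concrete, the following is a minimal sketch (not taken from the paper) of a finite-horizon fluid linear program for a toy two-armed restless bandit with a budget of one active arm per epoch. The occupancy variables, transition probabilities, rewards, horizon, and discount factor are all hypothetical illustration choices; the LP is solved with `scipy.optimize.linprog`. Its optimal value upper-bounds the expected reward of the coupled system because the coupling constraint is only enforced in expectation at the component level.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy instance: 2 restless arms, each with 2 states and
# actions {0: passive, 1: active}; at most one arm active per epoch.
T = 5            # horizon (illustrative choice)
gamma = 0.95     # discount factor (illustrative choice)
n_arms, n_states, n_actions = 2, 2, 2
budget = 1

# P[i][a][s][s2]: transition probabilities of component (arm) i alone
P = np.array([
    [[[0.9, 0.1], [0.2, 0.8]],   # arm 0, passive
     [[0.5, 0.5], [0.1, 0.9]]],  # arm 0, active
    [[[0.8, 0.2], [0.3, 0.7]],   # arm 1, passive
     [[0.4, 0.6], [0.2, 0.8]]],  # arm 1, active
])
# r[i][s][a]: reward for playing action a in state s of arm i
r = np.array([
    [[0.0, 1.0], [0.0, 2.0]],    # arm 0
    [[0.0, 0.5], [0.0, 3.0]],    # arm 1
])
s0 = [0, 0]  # initial state of each arm

def idx(t, i, s, a):
    """Flat index of occupancy variable x[t, i, s, a]."""
    return ((t * n_arms + i) * n_states + s) * n_actions + a

n_var = T * n_arms * n_states * n_actions
c = np.zeros(n_var)
for t in range(T):
    for i in range(n_arms):
        for s in range(n_states):
            for a in range(n_actions):
                # linprog minimizes, so negate the discounted reward
                c[idx(t, i, s, a)] = -(gamma ** t) * r[i][s][a]

A_eq, b_eq = [], []
# initial occupancy: all mass of arm i starts in state s0[i]
for i in range(n_arms):
    for s in range(n_states):
        row = np.zeros(n_var)
        for a in range(n_actions):
            row[idx(0, i, s, a)] = 1.0
        A_eq.append(row)
        b_eq.append(1.0 if s == s0[i] else 0.0)
# flow conservation per component: mass at (t+1, s2) = inflow from t
for t in range(T - 1):
    for i in range(n_arms):
        for s2 in range(n_states):
            row = np.zeros(n_var)
            for a in range(n_actions):
                row[idx(t + 1, i, s2, a)] = 1.0
            for s in range(n_states):
                for a in range(n_actions):
                    row[idx(t, i, s, a)] -= P[i][a][s][s2]
            A_eq.append(row)
            b_eq.append(0.0)

# coupling: expected number of active arms per epoch <= budget
A_ub, b_ub = [], []
for t in range(T):
    row = np.zeros(n_var)
    for i in range(n_arms):
        for s in range(n_states):
            row[idx(t, i, s, 1)] = 1.0
    A_ub.append(row)
    b_ub.append(float(budget))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None))
fluid_bound = -res.fun  # upper bound on the optimal expected reward
print(f"fluid upper bound: {fluid_bound:.4f}")
```

A re-solving policy in the spirit of the abstract would extract a first-epoch action from the optimal occupancies, apply it, observe the new component states, and solve the LP again at the next decision epoch.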
