Simplex Algorithm for Countable-State Discounted Markov Decision Processes

Article ID: iaor20173088
Volume: 65
Issue: 4
Start Page Number: 1029
End Page Number: 1042
Publication Date: Aug 2017
Journal: Operations Research
Authors: Ilbin Lee, Marina A. Epelman, H. Edwin Romeijn, Robert L. Smith
Keywords: optimization, programming: markov decision, combinatorial optimization, heuristics, inventory, queues: applications, control, programming: linear
Abstract:

We consider discounted Markov decision processes (MDPs) with countably infinite state spaces, finite action spaces, and unbounded rewards. Typical examples of such MDPs are inventory management and queueing control problems in which there is no specific limit on the size of the inventory or queue. Existing solution methods obtain a sequence of policies that converges to optimality in value but may not improve monotonically, i.e., a policy in the sequence may be worse than its predecessors. Our proposed approach considers countably infinite linear programming (CILP) formulations of the MDPs (a CILP is defined as a linear program (LP) with countably infinite numbers of variables and constraints). Under standard assumptions for analyzing MDPs with countably infinite state spaces and unbounded rewards, we extend the major theoretical extreme point and duality results to the resulting CILPs. Under additional mild assumptions, which are satisfied by several applications of interest, we present a simplex-type algorithm that is implementable in the sense that each of its iterations requires only a finite amount of data and computation. We show that the algorithm finds a sequence of policies that improves monotonically and converges to optimality in value. Unlike existing simplex-type algorithms for CILPs, our proposed algorithm solves a class of CILPs in which each constraint may contain an infinite number of variables and each variable may appear in an infinite number of constraints. A numerical illustration for inventory management problems is also presented. The online appendix is available at https://doi.org/10.1287/opre.2017.1598.
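To make the CILP connection concrete, the following is a minimal sketch of the standard dual (occupation-measure) LP formulation of a discounted MDP, which becomes a CILP when the state space S is countably infinite. The notation (rewards r, transition probabilities p, discount factor beta, initial-state weights alpha) is generic textbook notation and is not necessarily the paper's:

```latex
% Dual (occupation-measure) LP of a discounted MDP: state space S, finite
% action set A, rewards r(s,a), transition probabilities p(s'|s,a),
% discount factor beta in (0,1), initial-state weights alpha(s) > 0.
\begin{align*}
\max_{x \ge 0} \quad & \sum_{s \in S} \sum_{a \in A} r(s,a)\, x(s,a) \\
\text{s.t.} \quad & \sum_{a \in A} x(s,a)
  - \beta \sum_{s' \in S} \sum_{a \in A} p(s \mid s', a)\, x(s', a)
  = \alpha(s) \quad \text{for all } s \in S.
\end{align*}
```

When S is infinite, a single balance constraint can involve infinitely many variables x(s', a), and a single variable can appear in infinitely many constraints, which is exactly the class of CILPs the abstract says the algorithm handles.

For intuition about monotonically improving policies on an inventory example, the sketch below runs classical policy iteration on a finite truncation of a single-item inventory MDP. This is not the paper's simplex-type algorithm, which works directly on the countable state space while using only finite data and computation per iteration; all model parameters here are illustrative assumptions.

```python
# Hedged sketch: policy iteration on a FINITE truncation of an inventory MDP,
# illustrating the monotone value improvement the paper obtains directly on
# the countable state space. Parameters are illustrative, not the paper's.
import numpy as np

N = 50                          # truncated inventory levels {0, ..., N-1}
A = 5                           # order quantities {0, ..., A-1}
beta = 0.95                     # discount factor
h, c, price = 1.0, 2.0, 4.0    # holding cost, unit order cost, sale price
demand_p = np.array([0.3, 0.4, 0.2, 0.1])  # P(demand = 0, 1, 2, 3)

def step(s, a):
    """Expected one-period reward and next-state distribution."""
    r, dist = 0.0, np.zeros(N)
    for d, p in enumerate(demand_p):
        stock = min(s + a, N - 1)      # truncation caps the stock level
        sales = min(stock, d)
        r += p * (price * sales - c * a - h * stock)
        dist[stock - sales] += p
    return r, dist

R = np.zeros((N, A))
P = np.zeros((N, A, N))
for s in range(N):
    for a in range(A):
        R[s, a], P[s, a] = step(s, a)

policy = np.zeros(N, dtype=int)        # initial policy: never order
for _ in range(100):
    # Policy evaluation: solve (I - beta * P_pi) v = r_pi exactly.
    P_pi = P[np.arange(N), policy]
    r_pi = R[np.arange(N), policy]
    v = np.linalg.solve(np.eye(N) - beta * P_pi, r_pi)
    # Policy improvement: greedy one-step lookahead; v never decreases.
    q = R + beta * (P @ v)
    new_policy = q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break                           # optimal for the truncated model
    policy = new_policy
```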
