Article ID: iaor20113617
Volume: 59
Issue: 1
Pages: 50-65
Publication Date: Jan 2011
Journal: Operations Research
Authors: Mandelbaum Avishai, Armony Mor
Keywords: scheduling, programming: dynamic, programming: Markov decision, inventory
An important problem in the theory of dynamic programming is characterizing sufficient conditions under which the optimal policies for Markov decision processes (MDPs) under the infinite-horizon discounted-cost criterion converge to an optimal policy under the average-cost criterion as the discount factor approaches 1. In this paper, we provide such a set of sufficient conditions for stochastic inventory models. Unlike many others in the dynamic programming literature, these conditions hold even when the action space is noncompact and the underlying transition law is only weakly continuous. Moreover, we verify that they hold for almost all conceivable single-stage inventory models, under mild assumptions on the cost and demand parameters. As a consequence of our analysis, we provide the first partial characterization of optimal policies under the infinite-horizon average-cost criterion for the following inventory systems, which have thus far resisted such analysis: (a) capacitated systems with setup costs, (b) uncapacitated systems with convex ordering costs plus a setup cost, and (c) systems with lost sales and lead times.
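To make the vanishing-discount limit in the abstract concrete, here is a minimal, self-contained sketch (not the paper's method; all model parameters are hypothetical): value iteration for a small capacitated, lost-sales inventory model with a setup cost, solved under the discounted-cost criterion for discount factors approaching 1. The discounted-optimal ordering decisions stabilize as the discount factor grows, illustrating convergence toward an average-cost optimal policy of the kind the paper characterizes.

```python
import numpy as np

# Hypothetical model parameters (for illustration only).
MAX_INV = 20                          # inventory levels 0..MAX_INV
CAP = 8                               # order capacity per period
K, c, h, p = 5.0, 1.0, 1.0, 4.0       # setup, unit, holding, lost-sales penalty
demand = np.array([0.3, 0.4, 0.3])    # P(D = 0), P(D = 1), P(D = 2)

def solve(alpha, iters=4000):
    """Value iteration under the discounted-cost criterion; returns the
    order-quantity policy found at the final sweep."""
    v = np.zeros(MAX_INV + 1)
    policy = np.zeros(MAX_INV + 1, dtype=int)
    for _ in range(iters):
        v_new = np.empty_like(v)
        for x in range(MAX_INV + 1):
            best = np.inf
            for q in range(min(CAP, MAX_INV - x) + 1):
                y = x + q                            # post-order inventory
                order_cost = (K if q > 0 else 0.0) + c * q
                expected = 0.0
                for d, pd in enumerate(demand):
                    left = max(y - d, 0)             # lost sales: no backlog
                    stage = h * left + p * max(d - y, 0)
                    expected += pd * (stage + alpha * v[left])
                total = order_cost + expected
                if total < best:
                    best, policy[x] = total, q
            v_new[x] = best
        v = v_new
    return policy

# Discounted-optimal order quantities settle down as alpha -> 1.
for alpha in (0.9, 0.99, 0.999):
    print(alpha, solve(alpha)[:8])
```

In this toy setting the printed policies for discount factors 0.99 and 0.999 typically agree, consistent with the convergence phenomenon the paper establishes rigorously for much more general (noncompact-action, weakly continuous) inventory models.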