Article ID: iaor20032215
Country: United States
Volume: 48
Issue: 5
Start Page Number: 607
End Page Number: 624
Publication Date: May 2002
Journal: Management Science
Authors: Sox Charles R., Treharne James T.
This paper examines several policies for an inventory control problem in which the demand process is nonstationary and partially observed. The probability distribution of demand in each period is determined by the state of a Markov chain, the core process; however, the decision maker cannot observe this state directly and sees only the realized demand. Under this demand process, the inventory control problem is a composite-state, partially observed Markov decision process (POMDP), an appropriate model for a number of dynamic demand problems. In practice, managers often apply certainty equivalent control (CEC) policies to such problems. The results presented here demonstrate that other practical control policies almost always provide much better solutions than the CEC policies commonly used in practice, and the computational results also indicate how specific problem characteristics influence the performance of each alternative policy.
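The structure the abstract describes can be illustrated with a small sketch. The demand distribution depends on a hidden core-process state, so the decision maker maintains a belief (a probability vector over core states) updated by Bayes' rule after each observed demand; a CEC-style policy then collapses that belief to a point estimate before choosing an order quantity. All numbers, state labels, and the two-state setup below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical two-state core process (illustrative, not from the paper):
# state 0 = "low demand" regime, state 1 = "high demand" regime.
P = np.array([[0.9, 0.1],   # row s: transition probabilities out of state s
              [0.2, 0.8]])

# obs_probs[s, d] = P(demand = d | core state = s), demand levels d in {0, 1, 2}.
obs_probs = np.array([[0.6, 0.3, 0.1],
                      [0.1, 0.3, 0.6]])

def belief_update(belief, demand):
    """One Bayes-filter step: propagate the belief through the core-process
    transition matrix, then condition on the observed demand."""
    predicted = belief @ P                      # prior over the next core state
    posterior = predicted * obs_probs[:, demand]
    return posterior / posterior.sum()          # normalize to a distribution

def cec_order_up_to(belief, base_stock_by_state):
    """Certainty-equivalent sketch: treat the most likely core state as if it
    were known with certainty and use that state's order-up-to level, instead
    of optimizing against the full belief distribution."""
    return base_stock_by_state[int(np.argmax(belief))]

belief = np.array([0.5, 0.5])       # uninformative initial belief
for d in [2, 2, 1]:                 # a run of mostly high observed demands
    belief = belief_update(belief, d)
# After repeated high demands the belief concentrates on the high-demand state,
# so the CEC policy switches to that state's (hypothetical) base-stock level.
order_level = cec_order_up_to(belief, base_stock_by_state=[5, 12])
```

The gap the paper studies is visible even in this toy: the CEC rule discards the uncertainty encoded in `belief`, whereas the alternative policies the authors evaluate act on the belief state itself.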