Article ID: iaor1994235
Country: United States
Volume: 41
Issue: 3
Start Page Number: 583
End Page Number: 599
Publication Date: May 1993
Journal: Operations Research
Authors: Lovejoy William S.
Keywords: programming: dynamic, inventory: order policies
A parameter adaptive decision process is a sequential decision process in which some parameter or parameter set affecting the rewards and/or transitions of the process is not known with certainty. Signals from the performance of the system can be processed by the decision maker as time progresses, yielding information about which parameter set is operative. Active learning is an essential feature of these processes: the decision maker must choose actions that both guide the system in a preferred direction and yield information that can be used to better prescribe future actions. If the operative parameter set is known with certainty, the parameter adaptive problem reduces to a conventional stochastic dynamic program, which is presumed solvable. Previous authors have shown how to use these solutions to generate suboptimal policies with performance bounds for the parameter adaptive problem. Here it is shown that some desirable characteristics of those bounds are shared by a larger class of functions than those generated from fully observed problems, and that this generalization allows for iterative tightening of the bounds in a manner that preserves those attributes. An example inventory stocking problem demonstrates the technique.
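To make the setup concrete, the following is a minimal sketch (not the paper's algorithm or its bounding technique) of a parameter-adaptive inventory stocking problem: demand each period follows a Poisson distribution whose mean is one of a small candidate set, the decision maker maintains a Bayesian posterior over the candidates, and each period stocks the myopically optimal quantity under the current belief. All numbers and function names here are hypothetical; note that a myopic certainty-equivalent policy learns only passively, whereas the processes discussed above also value actions for the information they generate.

```python
from math import exp, factorial

# Hypothetical problem data: two candidate demand means, sale price, unit cost.
CANDIDATE_MEANS = [4, 8]
PRICE, COST = 5.0, 2.0

def demand_pmf(mean, d):
    """Poisson probability of observing demand d given the candidate mean."""
    return exp(-mean) * mean**d / factorial(d)

def posterior_update(belief, observed_demand):
    """Bayes' rule over the candidate parameter set after one demand signal."""
    likes = [b * demand_pmf(m, observed_demand)
             for b, m in zip(belief, CANDIDATE_MEANS)]
    total = sum(likes)
    return [l / total for l in likes]

def myopic_stock(belief, max_q=30):
    """Stock quantity maximizing one-period expected profit under the belief.

    Demand is truncated at max_q for the expectation; with means of 4 and 8
    the mass beyond 30 is negligible.
    """
    def expected_profit(q):
        ep = 0.0
        for b, m in zip(belief, CANDIDATE_MEANS):
            for d in range(max_q + 1):
                ep += b * demand_pmf(m, d) * (PRICE * min(q, d) - COST * q)
        return ep
    return max(range(max_q + 1), key=expected_profit)
```

Usage: starting from a uniform belief `[0.5, 0.5]` and feeding in a stream of observed demands via `posterior_update`, the posterior concentrates on the operative mean, and `myopic_stock` adapts the order quantity accordingly.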