Article ID: | iaor19982005 |
Country: | Netherlands |
Volume: | 86 |
Issue: | 3 |
Start Page Number: | 549 |
End Page Number: | 564 |
Publication Date: | Nov 1995 |
Journal: | European Journal of Operational Research |
Authors: | Serin Yasemin |
Keywords: | programming: dynamic, programming: nonlinear, decision theory |
The concept of partially observable Markov decision processes was introduced to handle the lack of information about the state of a Markov decision process. If the state of the system is unknown to the decision maker, an obvious approach is to gather information that helps in selecting an action. This problem has previously been solved using the theory of Markov processes. We construct a nonlinear programming model for the same problem and develop a solution algorithm that turns out to be a policy iteration algorithm. The policies found this way achieve the same optimal objective value as those found by the existing method, but are easier to use.
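The information-gathering step the abstract refers to is usually formalized as maintaining a belief (a probability distribution over the hidden state) that is updated by Bayes' rule after each observation. The paper's nonlinear programming model is not reproduced here; the following is only a minimal sketch of a standard belief update for a hypothetical two-state POMDP, with the matrices `T` and `O` invented for illustration:

```python
# Hypothetical 2-state POMDP (illustrative numbers only):
T = [[0.9, 0.1],   # T[s][s2] = P(next state s2 | state s) under a fixed action
     [0.2, 0.8]]
O = [[0.7, 0.3],   # O[s2][o] = P(observation o | next state s2)
     [0.4, 0.6]]

def belief_update(b, obs):
    """One Bayes-filter step: predict with T, correct with O, renormalize."""
    n = len(b)
    predicted = [sum(b[s] * T[s][s2] for s in range(n)) for s2 in range(n)]
    unnorm = [predicted[s2] * O[s2][obs] for s2 in range(n)]
    z = sum(unnorm)              # probability of the observation; normalizer
    return [p / z for p in unnorm]

# Starting from a uniform belief, observation 0 shifts mass toward state 0.
b1 = belief_update([0.5, 0.5], obs=0)
```

The updated belief is itself the state of an equivalent fully observable Markov decision process, which is what allows Markov decision theory (and, in this paper, a nonlinear program) to be applied to the partially observed problem.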