Information processing in a three-actions dynamic decision model

Article ID: iaor1996984
Country: Netherlands
Volume: 62
Issue: 3
Start Page Number: 282
End Page Number: 293
Publication Date: Nov 1992
Journal: European Journal of Operational Research
Authors: ,
Keywords: programming: dynamic
Abstract:

The authors introduce a discrete-time dynamic decision model in which the goal is to maximize the expected utility of the state at the end of a finite planning horizon. In general, three actions are available. Whereas actions 1 and 2 result in a stochastic transition of the state, action 3 is a stopping decision that implies a deterministic revision of the state up to the planning horizon. Action 1 is a learning action; in contrast, when action 2 is applied, no additional information can be obtained about the unknown distribution of the stochastic outcomes. For the logarithmic utility function the authors derive conditions that guarantee structural properties of the optimal policy, such as monotonicity and a stopping rule. They also present numerical examples in which the optimal policy exhibits counter-intuitive behaviour. Furthermore, the authors give conditions under which the general three-action model reduces to a model in which only two actions are relevant.
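
To make the structure of such a model concrete, the following is a minimal backward-induction sketch in Python. Its specification is an illustrative assumption, not the paper's: multiplicative up/down outcomes u and d, a two-point prior over the unknown up-probability (p_hi versus p_lo) for the learning action 1, a known up-probability p2 for the non-learning action 2, and deterministic growth at rate r per remaining period once the stopping action 3 is chosen. Because the utility is logarithmic and the dynamics are multiplicative, the value in this toy model separates as V_t(x, q) = log x + W_t(q), so the recursion only needs the belief q.

import numpy as np

# Illustrative sketch of a finite-horizon three-action model with log utility.
# All parameters below (u, d, r, p_hi, p_lo, p2) are toy assumptions.
N = 10                      # planning horizon
u, d = 1.5, 0.7             # multiplicative "up"/"down" outcomes of action 1
r = 1.03                    # deterministic per-period growth after stopping (action 3)
p_hi, p_lo = 0.7, 0.4       # two candidate values of the unknown up-probability
p2 = 0.55                   # known up-probability of action 2 (no learning)

beliefs = np.linspace(0.0, 1.0, 201)   # grid over q = P(p = p_hi | history)

def posterior(q, up):
    """Bayes update of q after observing an up (True) or down (False) outcome."""
    num = q * (p_hi if up else 1.0 - p_hi)
    den = num + (1.0 - q) * (p_lo if up else 1.0 - p_lo)
    return num / den if den > 0 else q

# Backward induction over the belief-dependent part W_t(q); W_N(q) = 0.
W = np.zeros_like(beliefs)
policy = []

for t in range(N - 1, -1, -1):
    W_next = W.copy()
    W_new = np.empty_like(W)
    act = np.empty(len(beliefs), dtype=int)
    for i, q in enumerate(beliefs):
        pu = q * p_hi + (1.0 - q) * p_lo          # predictive up-probability
        q_up, q_dn = posterior(q, True), posterior(q, False)
        # Action 1: stochastic transition, outcome observed, belief updated.
        v1 = (pu * (np.log(u) + np.interp(q_up, beliefs, W_next))
              + (1.0 - pu) * (np.log(d) + np.interp(q_dn, beliefs, W_next)))
        # Action 2: stochastic transition with known distribution, belief unchanged.
        v2 = p2 * np.log(u) + (1.0 - p2) * np.log(d) + W_next[i]
        # Action 3: stop and grow deterministically until the horizon.
        v3 = (N - t) * np.log(r)
        vals = (v1, v2, v3)
        act[i] = int(np.argmax(vals)) + 1
        W_new[i] = max(vals)
    W = W_new
    policy.append((t, act))

# Optimal first-period action as a function of the prior belief q.
t0, act0 = policy[-1]
for q, a in zip(beliefs[::50], act0[::50]):
    print(f"t={t0}, q={q:.2f}: action {a}")

Under this toy parameterization one can inspect how the optimal first-period action switches between learning, acting without learning, and stopping as the prior belief q varies; the paper's structural results (monotonicity, stopping rule, reduction to two relevant actions) concern conditions under which such switches are well behaved.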
