Article ID: iaor201111398
Volume: 1
Issue: 2
Start Page Number: 253
End Page Number: 279
Publication Date: Jun 2011
Journal: Dynamic Games and Applications
Authors: Jaśkiewicz Anna, Nowak Andrzej S
Keywords: control, game theory, programming: dynamic, stochastic processes
We study a discounted maxmin control problem with a general state space. The controller is unsure about his model in the sense that he also considers a class of approximate models as possibly true. The objective is to choose a maxmin strategy that will work under a range of different model specifications. This is accomplished by dynamic programming techniques. Under relatively weak conditions, we show that there is a solution to the optimality equation for the maxmin control problem as well as an optimal strategy for the controller. These results are applied to the theory of optimal growth and the Hansen–Sargent robust control model in macroeconomics. We also study a class of zero‐sum discounted stochastic games with unbounded payoffs and simultaneous moves, and give a brief overview of recent results on stochastic games with weakly continuous transitions and limiting average payoffs.
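For orientation, the optimality equation mentioned in the abstract is, in a discounted maxmin setting, typically a Bellman-type equation in which the controller maximizes over actions while nature minimizes over the admissible transition models. The sketch below uses illustrative notation only (the state space X, action sets A(x), one-period payoff u, discount factor beta, and the class Q(x,a) of transition kernels are assumptions, not the paper's own symbols).

% Schematic maxmin optimality equation (illustrative notation, not taken from the paper):
% V      - value function on the state space X
% A(x)   - actions available to the controller in state x
% Q(x,a) - class of transition probabilities regarded as possible (approximate models)
% u      - one-period payoff, beta in (0,1) - discount factor
\[
  V(x) \;=\; \sup_{a \in A(x)} \, \inf_{q \in Q(x,a)}
  \left\{ u(x,a) + \beta \int_X V(y)\, q(\mathrm{d}y \mid x,a) \right\},
  \qquad x \in X .
\]

Under conditions of the kind discussed in the paper, a measurable selector attaining the supremum yields an optimal (maxmin) stationary strategy for the controller.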