Infinite horizon programs: Convergence of approximate solutions

Article ID: iaor19911690
Country: Switzerland
Volume: 29
Start Page Number: 333
End Page Number: 350
Publication Date: Apr 1991
Journal: Annals of Operations Research
Authors:
Abstract:

This paper deals with infinite-horizon dynamic programs, stated in discrete time and involving no uncertainty. The essential objective, to be minimized, is the accumulated value of all discounted future costs, and it is assumed to satisfy the crucial condition that every lower level set is bounded with respect to a certain norm. That norm, as well as the natural space of trajectories, is problem-intrinsic. In contrast to standard Markov decision processes (MDPs), the authors admit unbounded single-period cost functions and exponential growth on an unbounded state space. Also, no stationarity of the problem data is assumed. The authors show, under broad hypotheses, that any minimizing sequence accumulates at points that solve the dynamic program optimally. This result is important for the study of approximation schemes.
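As a rough sketch in standard notation (the symbols below are assumed for illustration and are not taken from the abstract itself), the objective described here has the form

\[
\min_{(x_0, x_1, x_2, \dots)} \; \sum_{t=0}^{\infty} \alpha^{t}\, c_t(x_t, x_{t+1}), \qquad 0 < \alpha < 1,
\]

where c_t is the (possibly unbounded, non-stationary) single-period cost at stage t, x_t the state, and \alpha the discount factor. The level-set condition mentioned in the abstract would then require each set \{\, x = (x_0, x_1, \dots) : \sum_{t} \alpha^{t} c_t(x_t, x_{t+1}) \le r \,\} to be bounded in a problem-intrinsic norm on the space of trajectories.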
