Average, sensitive and Blackwell optimal policies in denumerable Markov decision chains with unbounded rewards

Article ID: iaor1988282
Country: United States
Volume: 13
Issue: 3
Start Page Number: 395
End Page Number: 420
Publication Date: Aug 1988
Journal: Mathematics of Operations Research
Authors: Dekker R., Hordijk A.
Abstract:

In this paper the authors consider a (discrete-time) Markov decision chain with a denumerable state space and compact action sets, and they assume that, for all states, the rewards and transition probabilities depend continuously on the actions. The first objective of the paper is to develop an analysis of average optimality without assuming a special Markov chain structure. To this end, the authors present a set of conditions guaranteeing average optimality which are automatically fulfilled in the finite state and action model. The second objective is to study average and discount optimality simultaneously, as Veinott did for the finite state and action model. The authors investigate the concepts of n-discount and Blackwell optimality in the denumerable state space setting, using a Laurent series expansion of the discounted rewards. Under the same conditions as for average optimality, they establish solutions to the n-discount optimality equations for every n.
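For orientation, the following is a minimal sketch of the Laurent expansion and the optimality concepts referred to in the abstract, written in the standard notation of Veinott-style sensitive discount optimality for a fixed stationary policy f; the symbols P(f), r(f), P*(f), H(f) and the interest rate rho are introduced here for illustration and are not taken from the paper itself.

% Sketch under standard assumptions; notation is illustrative, not the paper's.
\[
v_\beta(f) \;=\; \sum_{t=0}^{\infty}\beta^{t}P(f)^{t}r(f)
\;=\; (1+\rho)\Bigl[\rho^{-1}P^{*}(f)\,r(f) \;+\; \sum_{n=0}^{\infty}(-\rho)^{n}H(f)^{n+1}r(f)\Bigr],
\qquad \rho=\frac{1-\beta}{\beta},
\]
where $P^{*}(f)$ denotes the stationary (Ces\`aro-limit) matrix and $H(f)$ the deviation matrix of the Markov chain induced by $f$. A policy $f^{*}$ is called $n$-discount optimal if
\[
\liminf_{\beta\uparrow 1}\;\rho^{-n}\bigl[v_\beta(f^{*})-v_\beta(f)\bigr]\;\ge\;0
\qquad\text{for every policy } f,
\]
and Blackwell optimal if it is discount optimal for all $\beta$ sufficiently close to $1$; $(-1)$-discount optimality corresponds to average optimality.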
