A new strong optimality criterion for nonstationary Markov decision processes

Article ID: iaor20013553
Country: Germany
Volume: 52
Issue: 2
Start Page Number: 287
End Page Number: 306
Publication Date: Jan 2000
Journal: Mathematical Methods of Operations Research (Heidelberg)
Authors: , ,
Keywords: control processes
Abstract:

This paper deals with a new optimality criterion combining the usual three average criteria with the canonical triplet (the so-called strong average-canonical optimality criterion), and introduces the concept of a strong average-canonical policy for nonstationary Markov decision processes. This extends the canonical policies of Hernández-Lerma and Lasserre for stationary controlled Markov processes. For the case of possibly non-uniformly bounded rewards and a denumerable state space, we first construct, under some conditions, a solution to the optimality equations (OEs), and then prove that the Markov policies obtained from the OEs are optimal not only for the three average criteria but also for all finite-horizon criteria with a sequence of additional functions as their terminal rewards (i.e., they are strong average-canonical optimal). Some properties of optimal policies and the convergence of optimal average values are also discussed. Moreover, an error bound on the average reward between a rolling horizon policy and a strong average-canonical optimal policy is provided, and a rolling horizon algorithm for computing strong average ϵ (>0)-optimal Markov policies is given.
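The rolling horizon idea mentioned in the abstract can be illustrated with a minimal sketch: at each stage, solve an N-step backward induction from the current state and apply only the first action of the resulting plan. This is a generic illustration for a small finite, stationary MDP with invented data structures; the paper itself treats the nonstationary, denumerable-state case with possibly unbounded rewards, which this sketch does not capture.

```python
# Hypothetical sketch of a rolling-horizon policy for a finite MDP.
# P[s][a] is a dict {next_state: probability}; r[s][a] is the
# one-step reward. All names and data are illustrative assumptions.

def horizon_policy(P, r, states, actions, N, terminal):
    """N-step backward induction; returns the value function and the
    first-stage greedy policy of the N-horizon problem."""
    v = dict(terminal)          # terminal rewards v_N
    policy = {}
    for _ in range(N):
        v_new, policy = {}, {}
        for s in states:
            best_a, best_q = None, float("-inf")
            for a in actions:
                # Q-value: immediate reward plus expected continuation
                q = r[s][a] + sum(p * v[s2] for s2, p in P[s][a].items())
                if q > best_q:
                    best_a, best_q = a, q
            v_new[s], policy[s] = best_q, best_a
        v = v_new
    return v, policy

def rolling_horizon_action(P, r, states, actions, N, s):
    """Re-solve the N-horizon problem at the current state s and
    apply only its first action (zero terminal rewards assumed)."""
    _, policy = horizon_policy(P, r, states, actions, N,
                               {t: 0.0 for t in states})
    return policy[s]
```

The paper's error bound concerns how far the long-run average reward of such a rolling horizon policy can fall below that of a strong average-canonical optimal policy as the horizon N grows.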
