The finiteness of the reward function and the optimal function in Markov decision processes


Article ID: iaor20001738
Country: Germany
Volume: 49
Issue: 2
Start Page Number: 255
End Page Number: 266
Publication Date: Jan 1999
Journal: Mathematical Methods of Operations Research (Heidelberg)
Authors: ,
Abstract:

This paper studies discrete-time Markov decision processes (MDP) with expected discounted total reward, where the state space is countable, the action space is measurable, the reward function is extended real-valued, and the discount rate may be any real number. Two conditions, (GC) and (C), are presented, which are weaker than those previously presented in the literature. By eliminating some worst actions, the state space S can be partitioned into sets S+∞, S–∞ and S0, on which the optimal value function equals +∞, equals –∞, or is finite, respectively. Furthermore, the validity of the optimality equation is shown when its right-hand side is well defined, in particular when it is restricted to the subset S0. The reward function r(i, a) is finite and bounded above in a for each i ∈ S0. Finally, some sufficient conditions for (GC) and (C) are given.
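For orientation, the optimality equation referred to above has the standard discounted-reward form; the notation below (admissible action sets A(i), transition probabilities p(j | i, a), discount factor β) is a sketch of the usual setup and is not taken from the paper itself:

\[
V^{*}(i) \;=\; \sup_{a \in A(i)} \Big\{ r(i,a) \;+\; \beta \sum_{j \in S} p(j \mid i, a)\, V^{*}(j) \Big\}, \qquad i \in S_0 .
\]

The paper's point is that this equation remains valid whenever its right-hand side is well defined, in particular after restricting attention to the finite-value set S0.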
