Solving semi-Markov decision problems using average reward reinforcement learning

Article ID: iaor20002266
Country: United States
Volume: 45
Issue: 4
Start Page Number: 560
End Page Number: 574
Publication Date: Apr 1999
Journal: Management Science
Authors: Das, T.K., Gosavi, A., Mahadevan, S., Marchalleck, N.
Keywords: learning, Markov processes
Abstract:

A large class of problems of sequential decision making under uncertainty, in which the underlying probability structure is a Markov process, can be modeled as stochastic dynamic programs (referred to, in general, as Markov decision problems or MDPs). However, the computational complexity of the classical MDP algorithms, such as value iteration and policy iteration, is prohibitive and can grow intractably with the size of the problem and its related data. Furthermore, these techniques require, for each action, the one-step transition probability and reward matrices, which are often unrealistic to obtain for large and complex systems. Recently, there has been much interest in a simulation-based stochastic approximation framework called reinforcement learning (RL) for computing near-optimal policies for MDPs. RL has been successfully applied to very large problems, such as elevator scheduling and dynamic channel allocation in cellular telephone systems. In this paper, we extend RL to a more general class of decision tasks referred to as semi-Markov decision problems (SMDPs). In particular, we focus on SMDPs under the average-reward criterion. We present a new model-free RL algorithm called SMART, along with a detailed study of this algorithm on a combinatorially large problem of determining the optimal preventive maintenance schedule of a production inventory system. Numerical results from both the theoretical model and the RL algorithm are presented and compared.
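
To illustrate the kind of update the abstract describes, the following is a minimal Python sketch of an average-reward, Q-learning-style update for an SMDP, in which the lump-sum reward of a transition is offset by the estimated reward rate multiplied by the sojourn time. The simulator interface (env.reset, env.step), the parameter values, and the name smart_sketch are assumptions made for illustration; this is a sketch in the spirit of the paper, not the authors' SMART implementation.

```python
import random

def smart_sketch(env, actions, steps=10_000, alpha=0.1, epsilon=0.1):
    """Illustrative average-reward RL loop for an SMDP (not the authors' code)."""
    Q = {}                       # Q[(state, action)] -> action-value estimate
    rho = 0.0                    # running estimate of average reward per unit time
    total_reward, total_time = 0.0, 0.0

    state = env.reset()          # assumed simulator interface
    for _ in range(steps):
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action, greedy = random.choice(actions), False
        else:
            action = max(actions, key=lambda a: Q.get((state, a), 0.0))
            greedy = True

        # assumed to return next state, lump-sum reward, and sojourn time tau > 0
        next_state, reward, tau = env.step(action)

        # temporal-difference target: reward offset by rho * tau, reflecting
        # the average-reward criterion for semi-Markov transitions
        best_next = max(Q.get((next_state, a), 0.0) for a in actions)
        target = reward - rho * tau + best_next
        Q[(state, action)] = (1 - alpha) * Q.get((state, action), 0.0) + alpha * target

        # update the reward-rate estimate only on greedy (non-exploratory) steps
        if greedy:
            total_reward += reward
            total_time += tau
            rho = total_reward / total_time

        state = next_state
    return Q, rho
```

In this sketch the reward-rate estimate rho is updated only from greedy transitions so that exploratory actions do not bias it, which is a common design choice in average-reward RL.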
