An Online Actor–Critic Algorithm with Function Approximation for Constrained Markov Decision Processes

Article ID: iaor20123945
Volume: 153
Issue: 3
Start Page Number: 688
End Page Number: 708
Publication Date: Jun 2012
Journal: Journal of Optimization Theory and Applications
Authors: Bhatnagar, Shalabh; Lakshmanan, K.
Keywords: programming: Markov decision
Abstract:

We develop an online actor–critic reinforcement learning algorithm with function approximation for a problem of control under inequality constraints. We consider the long‐run average cost Markov decision process (MDP) framework in which both the objective and the constraint functions are suitable policy‐dependent long‐run averages of certain sample path functions. The Lagrange multiplier method is used to handle the inequality constraints. We prove the asymptotic almost sure convergence of our algorithm to a locally optimal solution. We also provide the results of numerical experiments on a problem of routing in a multi‐stage queueing network with constraints on long‐run average queue lengths. We observe that our algorithm exhibits good performance in this setting and converges to a feasible point.
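The abstract only outlines the approach, so below is a minimal illustrative sketch, not the authors' exact algorithm, of the general technique it describes: an online actor–critic update for an average-cost MDP with an inequality constraint handled through a Lagrange multiplier. The toy two-state MDP, one-hot state features for the linear critic, the Gibbs (softmax) policy for the actor, the step sizes, and the constraint bound ALPHA are all assumptions made here for illustration.

```python
import numpy as np

# Sketch of an online actor-critic with a Lagrange multiplier for a constrained
# average-cost MDP. The toy MDP, features, step sizes, and ALPHA are assumptions.

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 2, 2
ALPHA = 0.3                      # assumed bound on the long-run average constraint cost

# Toy transition kernel P[s, a, s'] and single-stage costs (illustrative only).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.3, 0.7], [0.6, 0.4]]])
cost = np.array([[1.0, 0.2], [0.5, 1.5]])   # objective single-stage cost c(s, a)
cons = np.array([[0.0, 1.0], [1.0, 0.0]])   # constraint single-stage cost g(s, a)

def phi(s):
    """One-hot state features for the linear critic (an assumption)."""
    f = np.zeros(N_STATES)
    f[s] = 1.0
    return f

def policy(theta, s):
    """Gibbs (softmax) policy over actions, parameterised per state."""
    prefs = theta[s] - theta[s].max()
    p = np.exp(prefs)
    return p / p.sum()

theta = np.zeros((N_STATES, N_ACTIONS))   # actor parameters
v = np.zeros(N_STATES)                    # critic weights (differential value)
rho = 0.0                                 # running estimate of average Lagrangian cost
lam = 0.0                                 # Lagrange multiplier for the constraint

# Step sizes on three timescales: critic fastest, actor slower, multiplier slowest.
a_c, a_a, a_l = 0.05, 0.01, 0.002

s = 0
for t in range(200_000):
    p = policy(theta, s)
    a = rng.choice(N_ACTIONS, p=p)
    s_next = rng.choice(N_STATES, p=P[s, a])

    # Lagrangian single-stage cost: objective plus lambda-weighted constraint cost.
    c_lag = cost[s, a] + lam * cons[s, a]

    # Critic: average-cost TD error and linear value-function update.
    delta = c_lag - rho + phi(s_next) @ v - phi(s) @ v
    rho += a_c * (c_lag - rho)
    v += a_c * delta * phi(s)

    # Actor: step against delta along grad log pi(a|s) to reduce expected cost.
    grad_log = -p
    grad_log[a] += 1.0
    theta[s] -= a_a * delta * grad_log

    # Multiplier: projected ascent on the constraint violation g(s, a) - ALPHA.
    lam = max(0.0, lam + a_l * (cons[s, a] - ALPHA))

    s = s_next

print("lambda:", round(lam, 3), "avg Lagrangian cost est.:", round(rho, 3))
```

The three step sizes mimic the multi-timescale structure typical of such schemes: the critic adapts fastest, the actor sees a nearly converged critic, and the multiplier sees a nearly converged actor, with the multiplier rising only while the constraint estimate exceeds its bound.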
