Article ID: iaor2012966
Volume: 2
Issue: 1
Start Page Number: 18
End Page Number: 50
Publication Date: Mar 2012
Journal: Dynamic Games and Applications
Authors: Chasparis Georgios, Shamma Jeff
Keywords: simulation: applications
We analyze reinforcement learning under so-called ‘dynamic reinforcement.’ In reinforcement learning, each agent repeatedly interacts with an unknown environment (i.e., other agents), receives a reward, and updates the probabilities of its next action based on its own previous actions and received rewards. Unlike standard reinforcement learning, dynamic reinforcement uses a combination of long-term rewards and recent rewards to construct myopically forward-looking action selection probabilities. We analyze the long-term stability of the learning dynamics for general games with pure strategy Nash equilibria and specialize the results to coordination games and distributed network formation. In this class of problems, more than one stable equilibrium (i.e., coordination configuration) may exist. We demonstrate equilibrium selection under dynamic reinforcement. In particular, we show how a single agent is able to destabilize an equilibrium in favor of another by appropriately adjusting its dynamic reinforcement parameters. We contrast these conclusions with prior game-theoretic results according to which the risk-dominant equilibrium is the only robust equilibrium when agents’ decisions are subject to small randomized perturbations. The analysis throughout is based on the ODE method for stochastic approximations, where a special form of perturbation in the learning dynamics allows for analyzing their behavior at the boundary points of the state space.
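To make the described mechanism concrete, the following is a minimal Python sketch of reinforcement learning with a dynamic (recency-weighted) reinforcement signal on a 2x2 coordination game. It is not the paper's exact update rule; the payoff matrix, the exponential reward trace, and the parameters `step`, `recency_weight`, and `perturbation` are illustrative assumptions, with the small perturbation only loosely mirroring the perturbed dynamics used for the boundary analysis.

```python
# Hedged sketch (assumed form, not the paper's update): each agent reinforces its
# played action using a mix of a long-run reward trace and the most recent reward.
import numpy as np

rng = np.random.default_rng(0)

# Common-payoff 2x2 coordination game: both agents are rewarded for matching,
# and matching on action 1 is the payoff-dominant configuration.
PAYOFF = np.array([[1.0, 0.0],
                   [0.0, 2.0]])

def simulate(T=20000, step=0.01, recency_weight=(0.5, 0.5), perturbation=1e-3):
    """Run two reinforcement learners; return their final action probabilities."""
    x = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]   # action-selection probabilities
    trace = [np.zeros(2), np.zeros(2)]                  # long-run reward traces per action

    for _ in range(T):
        # Sample actions from slightly perturbed strategies so every action keeps
        # positive probability (assumed form of the perturbation).
        probs = [(1 - perturbation) * x[i] + perturbation / 2 for i in range(2)]
        a = [rng.choice(2, p=probs[i]) for i in range(2)]
        reward = PAYOFF[a[0], a[1]]     # common reward to both agents

        for i in range(2):
            # Long-run trace: exponentially discounted accumulation of past rewards.
            trace[i] *= (1 - step)
            trace[i][a[i]] += step * reward

            # "Dynamic" reinforcement signal: convex combination of the long-run
            # trace and the most recent reward for the played action.
            w_long, w_recent = recency_weight
            signal = w_long * trace[i][a[i]] + w_recent * reward

            # Push probability mass toward the played action, scaled by the signal.
            e = np.zeros(2)
            e[a[i]] = 1.0
            x[i] += step * signal * (e - x[i])
            x[i] = np.clip(x[i], 0.0, 1.0)
            x[i] /= x[i].sum()

    return x

if __name__ == "__main__":
    print("Agent strategies after learning:", simulate())
```

In this toy setup, shifting one agent's `recency_weight` toward the most recent reward makes its strategy react faster to deviations, which is a rough illustration of how a single agent's dynamic reinforcement parameters can influence which coordination configuration the joint dynamics settle on.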