Article ID: | iaor20053229 |
Country: | Netherlands |
Volume: | 160 |
Issue: | 3 |
Start Page Number: | 614 |
End Page Number: | 637 |
Publication Date: | Feb 2005 |
Journal: | European Journal of Operational Research |
Authors: | Gérard Pierre, Meyer Jean-Arcady, Sigaud Olivier |
Keywords: | programming: dynamic |
Learning Classifier Systems (LCS) are rule-based Reinforcement Learning (RL) systems endowed with a generalization capability. In this paper, we highlight the differences between two kinds of LCSs. Some are used to perform RL directly, while others latently learn a model of the interactions between the agent and its environment. Such a model can be used to speed up the core RL process. Thus, these two kinds of learning processes are complementary. We show here how the notion of generalization differs depending on whether the system anticipates (like the Anticipatory Classifier System (ACS) and Yet Another Classifier System (YACS)) or not (like XCS). Moreover, we show some limitations of the formalism common to ACS and YACS, and propose a new system, called the Modular Anticipatory Classifier System (MACS), which allows the latent learning process to take advantage of new regularities. We describe how the model can be used to perform active exploration and how this exploration may be aggregated with the policy resulting from the reinforcement learning process. The different algorithms are validated experimentally and some limitations in the presence of uncertainties are highlighted.
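To illustrate the complementarity described in the abstract, the following is a minimal sketch of how a latently learned model of agent–environment interactions can speed up the core RL process. It is a tabular, Dyna-style approximation, not the paper's anticipatory-classifier formalism; all identifiers (learn_step, N_PLANNING, etc.) and parameter values are hypothetical choices for illustration.

```python
# Hedged sketch: direct RL plus latent model learning plus planning replay.
# This is NOT the MACS algorithm; it only illustrates the principle that a
# learned model of transitions can be replayed to accelerate value learning.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON, N_PLANNING = 0.1, 0.95, 0.1, 20   # assumed settings
ACTIONS = [0, 1, 2, 3]

Q = defaultdict(float)      # Q[(state, action)] -> estimated value
model = {}                  # model[(state, action)] -> (next_state, reward)

def choose_action(state):
    """Epsilon-greedy selection over current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(s, a, r, s_next):
    """One-step Q-learning backup."""
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def learn_step(s, a, r, s_next):
    # Direct RL from real experience.
    q_update(s, a, r, s_next)
    # Latent learning: record the observed regularity in the model.
    model[(s, a)] = (s_next, r)
    # Planning: replay remembered transitions to propagate values faster.
    for _ in range(N_PLANNING):
        (ps, pa), (pn, pr) = random.choice(list(model.items()))
        q_update(ps, pa, pr, pn)
```

In the paper's systems, the model is instead held by anticipatory classifiers that generalize over attributes of the perceived situation, and active exploration is derived from the model itself rather than from epsilon-greedy noise.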