Programming backgammon using self-teaching neural nets

Article ID: iaor20032936
Country: United States
Volume: 134
Issue: 1/2
Start Page Number: 181
End Page Number: 199
Publication Date: Jan 2002
Journal: Artificial Intelligence
Authors: Tesauro Gerald
Keywords: optimization, neural networks, artificial intelligence
Abstract:

TD-Gammon is a neural network that is able to teach itself to play backgammon solely by playing against itself and learning from the results. Starting from random initial play, TD-Gammon's self-teaching methodology results in a surprisingly strong program: without lookahead, its positional judgement rivals that of human experts, and when combined with shallow lookahead, it reaches a level of play that surpasses even the best human players. The success of TD-Gammon has also been replicated by several other programmers; at least two other neural net programs also appear to be capable of superhuman play. Previous papers on TD-Gammon have focused on developing a scientific understanding of its reinforcement learning methodology. This paper views machine learning as a tool in a programmer's toolkit, and considers how it can be combined with other programming techniques to achieve and surpass world-class backgammon play. Particular emphasis is placed on programming shallow-depth search algorithms, and on TD-Gammon's doubling algorithm, which is described in print here for the first time.
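The self-teaching methodology the abstract describes can be illustrated with a short sketch. Below is a minimal, hypothetical TD(0) self-play value update for a small sigmoid network, assuming the 198-unit board encoding reported for TD-Gammon; the hidden-layer width, learning rate, and function names are illustrative assumptions, and this is a sketch of the general technique rather than the paper's or TD-Gammon's actual code (TD-Gammon itself used TD(λ) with eligibility traces).

```python
import numpy as np

N_INPUTS = 198   # board-encoding length (assumption for this sketch)
N_HIDDEN = 40    # hidden-layer width (assumption)
ALPHA = 0.1      # learning rate (assumption)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(N_HIDDEN, N_INPUTS))
W2 = rng.normal(scale=0.1, size=N_HIDDEN)

def evaluate(x):
    """Estimated win probability for position x; returns (value, hidden activations)."""
    h = np.tanh(W1 @ x)
    v = 1.0 / (1.0 + np.exp(-(W2 @ h)))
    return v, h

def choose_move(afterstates):
    """Greedy self-play move selection: pick the position the net values most highly."""
    return max(afterstates, key=lambda x: evaluate(x)[0])

def td0_update(x_t, x_next, reward, terminal):
    """One TD(0) step: nudge V(x_t) toward the reward (terminal) or V(x_next)."""
    global W1, W2
    v, h = evaluate(x_t)
    target = reward if terminal else evaluate(x_next)[0]
    delta = target - v                    # temporal-difference error
    dv = v * (1.0 - v)                    # sigmoid derivative at the output
    grad_W2 = dv * h                      # gradient of V(x_t) w.r.t. output weights
    grad_W1 = dv * np.outer(W2 * (1.0 - h ** 2), x_t)  # ... and hidden weights
    W2 += ALPHA * delta * grad_W2
    W1 += ALPHA * delta * grad_W1
```

In a self-play loop, each side would pick its move with `choose_move` over the legal afterstates and the update above would be applied along the sequence of positions in every game, starting from randomly initialized weights.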
