Article ID: | iaor200948537 |
Country: | United States |
Volume: | 31 |
Issue: | 3 |
Start Page Number: | 490 |
End Page Number: | 512 |
Publication Date: | Aug 2006 |
Journal: | Mathematics of Operations Research |
Authors: | Renault, Jérôme |
Keywords: | Markov processes |
We consider a two-player zero-sum game, given by a Markov chain over a finite set of states and a family of matrix games indexed by states. The sequence of states follows the Markov chain. At the beginning of each stage, only Player 1 is informed of the current state; then the corresponding matrix game is played, and the actions chosen are observed by both players before proceeding to the next stage. We call such a game a Markov chain game with lack of information on one side. This model generalizes the Aumann and Maschler model of zero-sum repeated games with lack of information on one side (which corresponds to the case where the transition matrix of the Markov chain is the identity matrix). We generalize the proof of Aumann and Maschler and, from the definition and study of appropriate nonrevealing auxiliary games with infinitely many stages, show the existence of the uniform value. An important difference from Aumann and Maschler's model is that here the notions, for Player 1, of using the information and of revealing relevant information are distinct.
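The dynamics described in the abstract can be illustrated with a minimal simulation. The following sketch is purely illustrative: the two-state chain, the payoff matrices, and Player 1's naive strategy are hypothetical choices, not taken from the paper. It shows the stage structure (only Player 1 observes the state, both players observe actions, the state then moves by the Markov chain) and why an informed strategy that uses the state also tends to reveal it.

```python
import random

# Hypothetical two-state example (all numbers illustrative, not from the paper).
# A[k][i][j] is the payoff to Player 1 in state k when Player 1 plays row i
# and Player 2 plays column j (zero-sum: Player 2 receives the negative).
A = {
    0: [[1, 0], [0, 0]],   # in state 0, only (row 0, col 0) pays Player 1
    1: [[0, 0], [0, 1]],   # in state 1, only (row 1, col 1) pays Player 1
}
# Transition matrix: P[k][l] is the probability of moving from state k to l.
# Taking P to be the identity recovers the Aumann-Maschler model of repeated
# games with lack of information on one side.
P = [[0.9, 0.1], [0.1, 0.9]]


def play(T, seed=0):
    """Simulate T stages; return Player 1's average payoff.

    Only Player 1 observes the current state; both players observe the
    actions played at each stage.
    """
    rng = random.Random(seed)
    state = rng.choice([0, 1])
    total = 0.0
    for _ in range(T):
        # Naive informed strategy: Player 1 plays the only row that can pay
        # in the current state. This *uses* the information but also tends to
        # *reveal* it, since Player 2 observes the action -- the distinction
        # the abstract emphasizes.
        i = 0 if state == 0 else 1
        # The uninformed Player 2 plays uniformly at random in this sketch.
        j = rng.choice([0, 1])
        total += A[state][i][j]
        # The state then evolves according to the Markov chain.
        state = 0 if rng.random() < P[state][0] else 1
    return total / T


avg = play(10_000)
```

Against this uniform opponent, the naive strategy earns payoff 1 whenever Player 2's column matches, so the long-run average is about 0.5; a strategic Player 2 who exploits the revealed state could drive it lower, which is why the paper's analysis rests on nonrevealing auxiliary games.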