Stochastic game
A stochastic game, introduced by Lloyd Shapley in the early 1950s, is a dynamic game with probabilistic transitions between game states, played by one or more players.
At each stage, the payoffs depend on:
- The current state of the game.
- The actions chosen by the players.
At each new stage, the game state is drawn at random according to a transition probability distribution that depends on the previous state and the players' actions.
The procedure is then repeated at the new state, and play continues for a finite or infinite number of stages. The total payoff to a player is often taken to be either the discounted sum of the stage payoffs or the limit inferior of the averages of the stage payoffs.
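The stage-by-stage dynamics above can be sketched in a few lines of Python. The two-state, two-action game below is purely illustrative (payoffs, transition probabilities, and the uniformly random play are all assumptions, not part of any standard example); it simulates a run of the game and accumulates the discounted sum of player 1's stage payoffs.

```python
import random

# A toy two-state, two-action stochastic game (all numbers are illustrative).
# payoff[state][(a1, a2)] -> stage payoff to player 1
payoff = {
    "s0": {(0, 0): 1.0, (0, 1): -1.0, (1, 0): -1.0, (1, 1): 2.0},
    "s1": {(0, 0): 0.0, (0, 1): 3.0, (1, 0): -2.0, (1, 1): 1.0},
}
# transition[state][(a1, a2)] -> probability of moving to "s0" (else "s1")
transition = {
    "s0": {(0, 0): 0.9, (0, 1): 0.3, (1, 0): 0.5, (1, 1): 0.1},
    "s1": {(0, 0): 0.2, (0, 1): 0.6, (1, 0): 0.4, (1, 1): 0.8},
}

def play(n_stages=100, beta=0.95, seed=0):
    """Simulate n_stages of play under uniformly random actions and
    return the discounted sum of player 1's stage payoffs."""
    rng = random.Random(seed)
    state, total = "s0", 0.0
    for t in range(n_stages):
        actions = (rng.randint(0, 1), rng.randint(0, 1))  # both players randomize
        total += (beta ** t) * payoff[state][actions]     # discounted stage payoff
        # the next state depends only on the current state and the joint action
        state = "s0" if rng.random() < transition[state][actions] else "s1"
    return total

print(play())
```

Note that the payoff and the transition law both take the pair (state, joint action) as input, which is exactly the structure described above; the averaging criterion would simply replace the discounted sum with the running mean of the stage payoffs.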
Stochastic games generalize both Markov decision processes and repeated games. These games are studied using the concept of Markov perfect equilibrium, a refinement of subgame-perfect Nash equilibrium to stochastic games.
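Since a one-player stochastic game is exactly a Markov decision process, its discounted value can be computed by standard value iteration. The sketch below assumes a small illustrative MDP (the rewards and transition matrix are made up) and iterates the Bellman optimality update until convergence.

```python
# Value iteration for the single-player special case of a stochastic game
# (i.e. a Markov decision process). All numbers below are illustrative.
STATES, ACTIONS = range(2), range(2)
reward = [[1.0, 0.0],            # reward[s][a]: stage payoff in state s, action a
          [0.0, 2.0]]
P = [[[0.8, 0.2], [0.1, 0.9]],   # P[s][a][t]: probability of s -> t under a
     [[0.5, 0.5], [0.3, 0.7]]]

def value_iteration(beta=0.9, tol=1e-8):
    """Return the optimal discounted value of each state."""
    V = [0.0] * len(STATES)
    while True:
        # Bellman optimality update: best action against the current values
        newV = [max(reward[s][a] + beta * sum(P[s][a][t] * V[t] for t in STATES)
                    for a in ACTIONS)
                for s in STATES]
        if max(abs(n - v) for n, v in zip(newV, V)) < tol:
            return newV
        V = newV

print([round(v, 3) for v in value_iteration()])
```

In the two-player zero-sum setting, Shapley's original algorithm has the same shape, except that the inner maximization is replaced by solving a matrix game (a minimax problem) at each state.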
Material
Papers
- Shapley, L. S. (1953). Stochastic games. PNAS 39(10): 1095-1100.
- Vieille, N. (2002). Stochastic games: Recent results. In Handbook of Game Theory. Amsterdam: Elsevier Science, pp. 1833-1850.
- Mertens, J. F. & Neyman, A. (1981). Stochastic games. International Journal of Game Theory 10(2): 53-66.
- Condon, A. (1992). The complexity of stochastic games. Information and Computation 96: 203-224.
- Hansen, E. A., Bernstein, D. S. & Zilberstein, S. (2004). Dynamic programming for partially observable stochastic games. In AAAI, vol. 4, pp. 709-715.
- Dieckelmann, J. (2013). The complexity of simple stochastic games. arXiv preprint arXiv:0704.2779.
- Yamamoto, Y. (2015). Stochastic games with hidden states. Journal of Economic Literature.
- Dermed, L. M. & Isbell, C. L. (2009). Solving stochastic games. In Advances in Neural Information Processing Systems, pp. 1186-1194.