Markov process
A Markov process, named after the Russian mathematician Andrey Markov, is a stochastic process that satisfies the Markov property. A Markov process can be thought of as 'memoryless': loosely speaking, a process satisfies the Markov property if one can make predictions about the future of the process based solely on its present state just as well as one could knowing the process's full history. That is, conditional on the present state of the system, its future and past are independent.
A Markov process is fully described by its transition information, which assigns the probability of moving from the current state to each of the other possible states or values.
A well-known special case is the Markov chain: a discrete-time, discrete-valued process defined entirely by its set of states and its transition matrix.
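As an illustration, the following minimal Python sketch simulates such a chain for a hypothetical two-state weather model (the states and probabilities are invented for the example, not taken from any source above). Each row of the transition matrix gives the distribution of the next state given the current one, which is all the Markov property allows the future to depend on.

```python
import random

# Hypothetical two-state weather chain. P is a row-stochastic
# transition matrix: P[i][j] is the probability of moving from
# state i to state j in one step, so each row sums to 1.
states = ["sunny", "rainy"]
P = [
    [0.9, 0.1],  # transitions out of "sunny"
    [0.5, 0.5],  # transitions out of "rainy"
]

def simulate(start, n_steps):
    """Simulate a sample path of the chain.

    The next state is drawn using only the current state's row of P,
    never the earlier history -- this is the Markov property in action.
    """
    path = [start]
    i = states.index(start)
    for _ in range(n_steps):
        u, cumulative = random.random(), 0.0
        for j, p in enumerate(P[i]):
            cumulative += p
            if u < cumulative:
                i = j
                break
        path.append(states[i])
    return path

print(simulate("sunny", 5))
```

Running `simulate("sunny", 5)` prints a six-entry path (the start plus five steps), e.g. mostly "sunny" days given the 0.9 self-transition probability chosen here.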
See also
Wiener process, Lévy process, Gamma process, Poisson process
Material
- Introduction to Markov Chains on YouTube
- Markov Chains chapter in American Mathematical Society’s introductory probability book
- Chapter 5: Markov Chain Models
Papers
- Rogers, L. C. G., & Williams, D. (1987). Diffusions, Markov Processes and Martingales, Vol. 2: Itô Calculus. Wiley.
- Ikeda, N., Nagasawa, M., & Watanabe, S. (1968). Branching Markov processes I. Journal of Mathematics of Kyoto University, 8(2), 233-278.
- Baum, L. E., & Petrie, T. (1966). Statistical inference for probabilistic functions of finite state Markov chains. The Annals of Mathematical Statistics, 37(6), 1554-1563.