Finite-state Markov Chains Obey Benford's Law
Mathematics arXiv
Abstract
A sequence of real numbers (x_n) is Benford if the significands, i.e., the fractional parts in the floating-point representation of (x_n), are distributed logarithmically. Similarly, a discrete-time irreducible and aperiodic finite-state Markov chain with transition probability matrix P and limiting matrix P* is Benford if every component of both sequences of matrices (P^n - P*) and (P^(n+1) - P^n) is Benford or eventually zero. Using recent tools that established Benford behavior both for Newton's method and for finite-dimensional linear maps, via the classical theories of uniform distribution modulo 1 and Perron-Frobenius, this paper derives a simple sufficient condition ("nonresonance") guaranteeing that P, or the Markov chain associated with it, is Benford. This result in turn is used to show that almost all Markov chains are Benford, in the sense that if the transition probabilities are chosen independently and continuously, then the resulting Markov chain is Benford with probability one. Concrete examples illustrate the various cases that arise, and the theory is complemented by several simulations and potential applications.
Citation Information
Babar Kaynar, Arno Berger, Theodore P. Hill and Ad Ridder. "Finite-state Markov Chains Obey Benford's Law." Mathematics arXiv (2010).
Available at: http://works.bepress.com/tphill/34/