Article
Finite-state Markov Chains Obey Benford’s Law
SIAM Journal on Matrix Analysis and Applications (2011)
  • Arno Berger, University of Alberta
  • Theodore P. Hill, California Polytechnic State University - San Luis Obispo
  • Bahar Kaynar, Vrije Universiteit Amsterdam
  • Ad Ridder, Vrije Universiteit Amsterdam
Abstract

A sequence of real numbers (x_n) is Benford if the significands, i.e., the fraction parts in the floating-point representation of (x_n), are distributed logarithmically. Similarly, a discrete-time irreducible and aperiodic finite-state Markov chain with transition probability matrix P and limiting matrix P* is Benford if every component of both sequences of matrices (P^n − P*) and (P^{n+1} − P^n) is Benford or eventually zero. Using recent tools that established Benford behavior for finite-dimensional linear maps, via the classical theories of uniform distribution modulo 1 and Perron–Frobenius, this paper derives a simple sufficient condition (“nonresonance”) guaranteeing that P, or the Markov chain associated with it, is Benford. This result in turn is used to show that almost all Markov chains are Benford, in the sense that if the transition probability matrix is chosen in an absolutely continuous manner, then the resulting Markov chain is Benford with probability one. Concrete examples illustrate the various cases that arise, and the theory is complemented with simulations and potential applications.
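
The Benford property described in the abstract can be probed numerically. The following is a minimal sketch, not taken from the paper: the chain size k = 3, the random seed, the noise cutoff 1e-12, and the helper names random_stochastic_matrix and first_digit are illustrative assumptions. It draws a random transition matrix P (which, by the almost-all result above, should be Benford with probability one), computes the limiting matrix P* from the stationary distribution, and tallies the first significant digits of the entries of P^n − P* against the Benford frequencies log10(1 + 1/d).

```python
# A rough numerical illustration (not code from the paper): draw a random
# row-stochastic matrix P, form P^n - P* for successive n, and compare the
# first-digit frequencies of the entries with Benford's law,
#   Prob(first digit = d) = log10(1 + 1/d),  d = 1, ..., 9.
import numpy as np

def random_stochastic_matrix(k, rng):
    """Random k-by-k transition matrix: each row normalized to sum to 1."""
    P = rng.random((k, k))
    return P / P.sum(axis=1, keepdims=True)

def first_digit(x):
    """First significant decimal digit of a nonzero real number x."""
    return int(10 ** (np.log10(abs(x)) % 1.0))

rng = np.random.default_rng(0)
k = 3                                   # illustrative chain size (assumption)
P = random_stochastic_matrix(k, rng)

# Limiting matrix P*: every row equals the stationary distribution of P,
# obtained here from the eigenvector of P^T for the eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi = pi / pi.sum()
P_star = np.tile(pi, (k, 1))

# Collect first digits of the nonzero entries of P^n - P*, stopping before
# the difference sinks into floating-point noise.
digits = []
Pn = np.eye(k)
for n in range(1, 1000):
    Pn = Pn @ P                         # P^n, built iteratively
    D = Pn - P_star
    if np.max(np.abs(D)) < 1e-12:
        break
    digits.extend(first_digit(d) for d in D.ravel() if d != 0.0)

observed = np.bincount(digits, minlength=10)[1:10] / len(digits)
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
print("observed:", np.round(observed, 3))
print("Benford :", np.round(benford, 3))
```

With a random P of this kind, the observed frequencies should roughly track the Benford values (0.301, 0.176, 0.125, ...), with the agreement improving as more powers and larger chains are included.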

Publication Date
July 2011
Publisher Statement
Copyright © 2010 Society for Industrial and Applied Mathematics. The definitive version is available at http://dx.doi.org/10.1137/100789890.
Citation Information
Arno Berger, Theodore P. Hill, Bahar Kaynar and Ad Ridder. "Finite-state Markov Chains Obey Benford’s Law," SIAM Journal on Matrix Analysis and Applications Vol. 32, Iss. 3 (2011).
Available at: http://works.bepress.com/tphill/80/