Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals.
Journal of Applied Psychology (2015)
  • Joel Koopman, Michigan State University
  • Michael D. Howe, Michigan State University
  • John R. Hollenbeck, Michigan State University
  • Hock-Peng Sin, Florida International University
Bootstrapping is an analytical tool commonly used in psychology to test the statistical significance of the indirect effect in mediation models. Bootstrapping proponents have particularly advocated for its use for samples of 20-80 cases. This advocacy has been heeded, especially in the Journal of Applied Psychology, as researchers are increasingly utilizing bootstrapping to test mediation with samples in this range. We discuss reasons to be concerned with this escalation, and in a simulation study focused specifically on this range of sample sizes, we demonstrate not only that bootstrapping has insufficient statistical power to provide a rigorous hypothesis test in most conditions but also that bootstrapping has a tendency to exhibit an inflated Type I error rate. We then extend our simulations to investigate an alternative empirical resampling method as well as a Bayesian approach and demonstrate that they exhibit comparable statistical power to bootstrapping in small samples without the associated inflated Type I error. Implications for researchers testing mediation hypotheses in small samples are presented. For researchers wishing to use these methods in their own research, we have provided R syntax in the online supplemental materials.
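The R syntax in the paper's online supplemental materials is not reproduced here. As a rough illustration of the procedure the abstract describes, the following is a minimal Python sketch (using NumPy) of a percentile bootstrap confidence interval for the indirect effect, i.e. the product a*b in a simple X → M → Y mediation model. All function names are illustrative, not taken from the paper's materials.

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate a*b for the X -> M -> Y model."""
    a = np.polyfit(x, m, 1)[0]                     # a path: slope of M regressed on X
    design = np.column_stack([np.ones_like(x), m, x])
    coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
    b = coefs[1]                                   # b path: Y on M, controlling for X
    return a * b

def percentile_bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the indirect effect (case resampling)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)           # resample cases with replacement
        estimates[i] = indirect_effect(x[idx], m[idx], y[idx])
    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

The indirect effect is typically judged significant at level alpha when this interval excludes zero; the paper's caution is that with samples of roughly 20-80 cases this test is often underpowered and can exhibit an inflated Type I error rate.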
  • Mediation
  • Bootstrapping
  • Permutation
  • Bayes
  • Statistical power
  • Type I error
Publication Date
January 2015
Citation Information
Joel Koopman, Michael D. Howe, John R. Hollenbeck, and Hock-Peng Sin. "Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals." Journal of Applied Psychology Vol. 100 Iss. 1 (2015) pp. 194-202