Article
Evaluating the Reliability of the Human Factors Analysis and Classification System
Aerospace Medicine and Human Performance (2015)
  • Tara N. Cohen
  • Douglas A. Wiegmann
  • Scott A. Shappell, Embry-Riddle Aeronautical University
Abstract
INTRODUCTION: This paper examines the reliability of the Human Factors Analysis and Classification System (HFACS) as a tool for coding human error and the contributing factors associated with accidents and incidents.

METHODS: A systematic review of articles published over the 13-yr period between 2001 and 2014 identified 14 peer-reviewed manuscripts that reported data on the reliability of HFACS.

RESULTS: The majority of these papers reported acceptable levels of interrater and intrarater reliability.

CONCLUSION: Reliability levels were higher with increased training and sample sizes. Likewise, when deviations from the original framework were minimized, reliability levels increased. Future applications of the framework should consider these factors to ensure the reliability and utility of HFACS as an accident analysis and classification tool.
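The abstract does not state which agreement statistic the reviewed studies used; a common choice for nominal coding schemes such as HFACS is Cohen's kappa, which corrects observed rater agreement for agreement expected by chance. The following is a minimal illustrative sketch in Python, using hypothetical rater data, of how interrater reliability might be computed for two analysts coding the same set of events:

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning nominal codes to the same cases."""
    n = len(rater_a)
    # Observed proportion of cases where the two raters agree.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance-expected agreement from each rater's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical HFACS "unsafe acts" codes assigned by two analysts to ten events.
analyst_1 = ["skill", "decision", "skill", "perceptual", "violation",
             "skill", "decision", "skill", "violation", "decision"]
analyst_2 = ["skill", "decision", "skill", "decision", "violation",
             "skill", "skill", "skill", "violation", "decision"]

print(f"kappa = {cohens_kappa(analyst_1, analyst_2):.2f}")  # kappa = 0.70

By common conventions (e.g., Landis and Koch, 1977), kappa values between 0.61 and 0.80 indicate substantial agreement; whether the reviewed studies used this statistic or an alternative such as Krippendorff's alpha is not specified in the abstract.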
Keywords
  • HFACS
  • Human Factors Analysis and Classification System
  • human error
  • accident analysis
  • reliability
  • error analysis
Publication Date
August 2015
DOI
https://doi.org/10.3357/AMHP.4218.2015
Citation Information
Tara N. Cohen, Douglas A. Wiegmann, and Scott A. Shappell. "Evaluating the Reliability of the Human Factors Analysis and Classification System." Aerospace Medicine and Human Performance Vol. 86, Iss. 8 (2015), pp. 728-735. ISSN: 2375-6314.
Available at: http://works.bepress.com/scott-shappell/10/