Beyond kappa: A review of interrater agreement measures
The Canadian Journal of Statistics
  • Mousumi Banerjee
  • Michelle Capozzoli
  • Laura McSweeney, Fairfield University
  • Debajyoti Sinha
Document Type
Article
Publication Date
1-1-1999
Abstract

In 1960, Cohen introduced the kappa coefficient to measure chance‐corrected nominal scale agreement between two raters. Since then, numerous extensions and generalizations of this interrater agreement measure have been proposed in the literature. This paper reviews and critiques various approaches to the study of interrater agreement, for which the relevant data comprise either nominal or ordinal categorical ratings from multiple raters. It presents a comprehensive compilation of the main statistical approaches to this problem, descriptions and characterizations of the underlying models, and discussions of related statistical methodologies for estimation and confidence‐interval construction. The emphasis is on various practical scenarios and designs that underlie the development of these measures, and the interrelationships between them.
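For readers wanting the abstract's central quantity made concrete: Cohen's (1960) kappa for two raters is defined as κ = (p_o − p_e)/(1 − p_e), where p_o is the observed proportion of agreement and p_e the proportion expected by chance from the raters' marginals. The following is a minimal Python sketch of that standard definition; the 3×3 confusion table is illustrative made-up data, not taken from the paper.

```python
import numpy as np

def cohens_kappa(confusion: np.ndarray) -> float:
    """Cohen's (1960) chance-corrected agreement for two raters.

    confusion[i, j] counts subjects placed in category i by rater 1
    and category j by rater 2.
    """
    n = confusion.sum()
    p_o = np.trace(confusion) / n                        # observed agreement
    p_e = (confusion.sum(axis=1) @ confusion.sum(axis=0)) / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters classifying 50 subjects
# into three nominal categories.
table = np.array([[14,  2,  1],
                  [ 3, 12,  2],
                  [ 2,  1, 13]])
print(round(cohens_kappa(table), 3))  # 0.670
```

Here p_o = 39/50 = 0.78 and p_e ≈ 0.334, giving κ ≈ 0.67; the review surveys extensions of exactly this construction to multiple raters and to ordinal (weighted) categories.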

Comments

Copyright © 1999 Statistical Society of Canada, published by Wiley. A link to full-text has been provided for authorized subscribers.

Published Citation
Banerjee, M., Capozzoli, M., McSweeney, L., & Sinha, D. (1999). "Beyond kappa: A review of interrater agreement measures," The Canadian Journal of Statistics, Vol. 27, No. 1, pp. 3-23. https://doi.org/10.2307/3315487
DOI
10.2307/3315487
Citation Information
Mousumi Banerjee, Michelle Capozzoli, Laura McSweeney, and Debajyoti Sinha. "Beyond kappa: A review of interrater agreement measures," The Canadian Journal of Statistics, Vol. 27, Iss. 1 (1999).
Available at: http://works.bepress.com/laura-mcsweeney/4/