Article
How Random Noise and a Graphical Convention Subverted Behavioral Scientists' Explanations of Self-Assessment Data: Numeracy Underlies Better Alternatives
Numeracy (2017)
  • Edward Nuhfer
  • Steven Fleisher
  • Christopher Cogan
  • Karl Wirth
  • Eric Gaze
Abstract
Despite nearly two decades of research, researchers have not resolved whether people generally perceive their skills accurately or inaccurately. In this paper, we trace this lack of resolution to numeracy, specifically to the frequently overlooked complications that arise from the noisy data produced by the paired measures that researchers employ to determine self-assessment accuracy. To illustrate the complications and ways to resolve them, we employ a large dataset (N = 1154) obtained from paired measures of documented reliability to study self-assessed proficiency in science literacy. We collected demographic information that allowed both criterion-referenced and normative-based analyses of self-assessment data. We used these analyses to propose a quantitatively based classification scale and show how its use informs the nature of self-assessment. Much of the current consensus about people's inability to self-assess accurately comes from interpreting normative data presented in the Kruger-Dunning type graphical format or closely related (y - x) vs. (x) graphical conventions. Our data show that people's self-assessments of competence, in general, reflect a genuine competence that they can demonstrate. That finding contradicts the current consensus about the nature of self-assessment. Our results further confirm that experts are more proficient in self-assessing their abilities than novices and that women, in general, self-assess more accurately than men. The validity of interpretations of data depends strongly upon how carefully the researchers consider the numeracy that underlies graphical presentations and conclusions. Our results indicate that carefully measured self-assessments provide valid, measurable, and valuable information about proficiency.
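The abstract's central numeracy point, that plotting (y - x) against (x) manufactures a negative correlation even from pure noise, can be illustrated with a minimal random-number simulation. This sketch is an illustration only, not the authors' actual procedure: it assumes self-assessed scores (y) and measured scores (x) that are independent uniform random draws, in which case corr(y - x, x) converges to about -0.71 despite the data containing no self-assessment signal at all.

```python
import random
import statistics

random.seed(42)

n = 10_000
# Hypothetical "paired measures": self-assessed (y) and demonstrated (x)
# proficiency scores drawn as pure, independent noise on a 0-100 scale.
x = [random.uniform(0, 100) for _ in range(n)]
y = [random.uniform(0, 100) for _ in range(n)]

# The Kruger-Dunning-style plotting convention compares (y - x) with (x).
diff = [yi - xi for xi, yi in zip(x, y)]

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

r = pearson(diff, x)
print(f"corr(y - x, x) = {r:.3f}")  # near -1/sqrt(2), i.e. about -0.707
```

Because x appears on both axes, the shared noise term alone drives the correlation toward -1/sqrt(2); a naive reading of such a plot would conclude that low performers overestimate themselves, which is the interpretive trap the article examines.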
Keywords
  • self-assessment
  • self-assessment classification scale
  • Dunning-Kruger Effect
  • knowledge surveys
  • graphs
  • numeracy
  • random number simulation
  • noise
  • signal
Publication Date
2017
DOI
10.5038/1936-4660.10.1.4
Citation Information
Edward Nuhfer, Steven Fleisher, Christopher Cogan, Karl Wirth, and Eric Gaze. "How Random Noise and a Graphical Convention Subverted Behavioral Scientists' Explanations of Self-Assessment Data: Numeracy Underlies Better Alternatives" Numeracy Vol. 10 Iss. 1 (2017) ISSN: 1936-4660
Available at: http://works.bepress.com/karl_wirth/28/