What Does Explainable AI Really Mean? A New Conceptualization of Perspectives
arXiv
  • Derek Doran, Wright State University - Main Campus
  • Sarah Schulz
  • Tarek R Besold
Document Type
Article
Publication Date
10-1-2017
Abstract

We characterize three notions of explainable AI that cut across research fields: opaque systems that offer no insight into their algorithmic mechanisms; interpretable systems where users can mathematically analyze their algorithmic mechanisms; and comprehensible systems that emit symbols enabling user-driven explanations of how a conclusion is reached. The paper is motivated by a corpus analysis of NIPS, ACL, COGSCI, and ICCV/ECCV paper titles showing differences in how work on explainable AI is positioned in various fields. We close by introducing a fourth notion: truly explainable systems, where automated reasoning is central to producing crafted explanations without requiring human post-processing as the final step of the generative process.

Citation Information
Derek Doran, Sarah Schulz, and Tarek R. Besold. "What Does Explainable AI Really Mean? A New Conceptualization of Perspectives." arXiv (2017).
Available at: http://works.bepress.com/derek_doran/69/