Article
Coarse Ethics: How to Ethically Assess Explainable Artificial Intelligence
AI and Ethics (2021)
  • Takashi Izumo, Nihon University
  • Yueh-Hsuan Weng, Tohoku University
Abstract
The integration of artificial intelligence (AI) into human society mandates that its decision-making processes be explicable to users, as exemplified in Asimov’s Three Laws of Robotics. Such human interpretability calls for explainable AI (XAI), several models of which this paper cites. However, computable accuracy and human interpretability can stand in a trade-off, raising questions about the conditions under which the trade-off is negotiable and the degree of AI prediction accuracy that may be sacrificed for the sake of user interpretability. The extant research has focussed on technical issues, but it is also desirable to apply a branch of ethics to the trade-off problem. This study labels that scholarly domain coarse ethics and discusses two issues vis-à-vis AI prediction as a type of evaluation. First, which formal conditions would allow trade-offs? The study posits two minimal requisites: adequately high coverage and order-preservation. The second issue concerns the conditions that could justify the trade-off between computable accuracy and human interpretability, for which the study suggests two justification methods: impracticability and adjustment of perspective from the machine-computable to the human-interpretable. This study contributes to the future regulation of autonomous systems by connecting them to ethics through a formal assessment of the adequacy of AI rationales.
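
The two formal requisites named in the abstract lend themselves to a small illustration. The Python sketch below is not from the paper; the grading bands and labels are hypothetical. It coarsens a fine-grained 0-100 score into letter grades and checks both conditions: every fine-grained value is covered by some coarse grade, and a higher fine score never receives a strictly lower coarse grade.

    # A minimal sketch (not the paper's formalism) of coverage and
    # order-preservation when a fine scale is coarsened for humans.
    # The band boundaries and grade labels are hypothetical.

    FINE_SCALE = range(0, 101)      # fine-grained scores 0..100
    GRADE_BANDS = [                 # coarse scale: (lower bound, label)
        (90, "A"),
        (75, "B"),
        (60, "C"),
        (0,  "F"),
    ]

    def coarsen(score: int) -> str:
        """Map a fine-grained score onto the coarse scale."""
        for lower, label in GRADE_BANDS:
            if score >= lower:
                return label
        raise ValueError(f"score {score} not covered by the coarse scale")

    # Coverage: every fine-grained value receives some coarse grade.
    assert all(coarsen(s) is not None for s in FINE_SCALE)

    # Order-preservation: a higher fine score never gets a strictly
    # lower coarse grade (grades ranked by band order, 0 = best).
    rank = {label: i for i, (_, label) in enumerate(GRADE_BANDS)}
    for lo_score, hi_score in zip(FINE_SCALE, list(FINE_SCALE)[1:]):
        assert rank[coarsen(hi_score)] <= rank[coarsen(lo_score)]

A coarse evaluation of this kind loses precision (an 82 and an 89 both read "B") but remains a faithful summary of the fine evaluation in the sense the two conditions capture.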
Keywords
  • Explainable AI
  • AI Ethics
  • Robot Ethics
  • Human-Robot Interaction
  • Autonomous Vehicles
Publication Date
September 7, 2021
Citation Information
Takashi Izumo and Yueh-Hsuan Weng. "Coarse Ethics: How to Ethically Assess Explainable Artificial Intelligence." AI and Ethics (2021).
Available at: http://works.bepress.com/weng_yueh_hsuan/129/