Legible Normativity for AI Alignment: The Value of Silly Rules
Proceedings of the AI Ethics and Society Conference (2019)
  • Dylan Hadfield-Menell
  • McKane Andrus, University of California, Berkeley
  • Gillian K. Hadfield
Abstract
It has become commonplace to assert that autonomous agents will have to be built to follow human rules of behavior: social norms and laws. But human laws and norms are complex and culturally varied systems; in many cases agents will have to learn the rules. This requires autonomous agents to have models of how human rule systems work so that they can make reliable predictions about rules. In this paper we contribute to the building of such models by analyzing an overlooked distinction between important rules and what we call silly rules: rules with no discernible direct impact on welfare. We show that silly rules render a normative system both more robust and more adaptable in response to shocks to perceived stability. They make normativity more legible for humans, and can increase legibility for AI systems as well. For AI systems to integrate into human normative systems, we suggest, it may be important for them to have models that include representations of silly rules.
Keywords
  • artificial intelligence,
  • value alignment,
  • social order,
  • culture,
  • norms,
  • AI ethics and society
Publication Date
March 2019
Citation Information
Dylan Hadfield-Menell, McKane Andrus, and Gillian K. Hadfield. "Legible Normativity for AI Alignment: The Value of Silly Rules." Proceedings of the AI Ethics and Society Conference (2019).
Available at: http://works.bepress.com/ghadfield/67/