Developing Automated Deceptions and the Impact on Trust
School of Computer Science & Engineering Faculty Publications
  • Frances Grodzinsky, Sacred Heart University
  • Keith W. Miller, University of Missouri–St. Louis
  • Marty J. Wolf, Bemidji State University
Document Type
Peer-Reviewed Article
Publication Date
3-1-2015
Abstract

As software developers design artificial agents (AAs), they often have to wrestle with complex issues of philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact’s development? While exploring these questions from the perspective of a software developer, we examine the relationship between deception and trust. Are developers using deception to gain our trust? Is trust generated through technological “enchantment” warranted? Next, we investigate more complex questions of how deception that involves AAs differs from deception that only involves humans. Finally, we analyze the role and responsibility of developers in trust situations that involve both humans and AAs.

Comments

First published online: February 18, 2014.

DOI
10.1007/s13347-014-0158-7
Citation Information

Grodzinsky, F., Miller, K. W., & Wolf, M. J. (2015). Developing automated deceptions and the impact on trust. Philosophy & Technology, 28(1), 91–105. doi: 10.1007/s13347-014-0158-7