Unpublished Paper
Human-focused Turing tests: A framework for judging nudging and techno-social engineering of human beings
ExpressO (2015)
  • Brett M. Frischmann
Abstract

This article makes two major contributions. First, it develops a methodology to investigate techno-social engineering of human beings. Second, it investigates the ongoing behavioral law and economics project of nudging, which is a particular form of techno-social engineering.

Many claim that technology dehumanizes, but this article is the first to develop a systematic approach to identifying when technologies dehumanize. The methodology depends on a fundamental and radical repurposing of the Turing test. The article develops an initial series of human-focused tests that examine different aspects of intelligence and distinguish humans from machines: (a) mathematical computation, (b) random number generation, (c) common sense, and (d) rationality. All four are plausible reverse Turing tests that could generally be used to distinguish humans from machines. Yet the first two do not implicate fundamental notions of what it means to be human; the third and fourth do. When these latter two tests are passed, we have good reason to question and evaluate the humans and the techno-social environment within which they are situated.
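
To make the random number generation test concrete, consider a minimal, hypothetical sketch (not drawn from the article) of how such a reverse Turing test might be scored in Python. It relies on the well-documented tendency of people asked to produce random digits to avoid immediate repeats; the function names, the simulated "human" generator, and the 10%/5% thresholds are illustrative assumptions, not the article's method.

    import random

    def repeat_rate(digits):
        """Return the fraction of adjacent positions where a digit repeats."""
        if len(digits) < 2:
            return 0.0
        repeats = sum(1 for a, b in zip(digits, digits[1:]) if a == b)
        return repeats / (len(digits) - 1)

    def looks_machine_generated(digits, expected=0.10, tolerance=0.05):
        """Uniform random digits repeat roughly 10% of the time; sequences
        produced by people usually fall well below that band."""
        return abs(repeat_rate(digits) - expected) <= tolerance

    def simulated_human(n):
        """Stylized 'human' generator that never repeats the previous digit,
        mimicking the documented human aversion to immediate repeats."""
        out = []
        for _ in range(n):
            d = random.randint(0, 9)
            while out and d == out[-1]:
                d = random.randint(0, 9)
            out.append(d)
        return out

    if __name__ == "__main__":
        machine = [random.randint(0, 9) for _ in range(500)]
        human = simulated_human(500)
        print("machine sequence scored as machine:", looks_machine_generated(machine))
        print("human sequence scored as machine:  ", looks_machine_generated(human))

The point is only methodological: a short statistical check can separate typical human output from machine output on this task without implicating anything fundamental about being human, which is why the article sets this test apart from the common sense and rationality tests.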

This article applies insights from the common sense and rationality tests to evaluate the ongoing behavioral law and economics project of nudging us to become rational humans. Based on decades of findings from cognitive psychologists and behavioral economists, this project has influenced academics across many disciplines and public policies around the world. There are a variety of institutional means for implementing “nudges” to improve human decision making in contexts where humans tend to act irrationally or contrary to their own welfare. Cass Sunstein defines nudges more narrowly and carefully as “low-cost, choice-preserving, behaviorally informed approaches to regulatory problems, including disclosure requirements, default rules, and simplification.” These approaches tend to be transparent and thus more palatable. But there are other, covert approaches, such as subliminal advertising. The underlying logic of nudging is to construct or modify the “choice architecture,” the environment within which humans make decisions. Yet as Lawrence Lessig made clear long ago, architecture regulates powerfully but subtly, and it can easily run roughshod over values that don’t matter to the architects. Techno-social engineering through (choice) architecture is rampant and will grow in scale and scope in the near future, and it demands close attention because of its subtle influence on both what people do and what people believe to be possible. Accordingly, this article evaluates nudging as a systematic agenda in which institutional decisions about particular nudges aggregate and set a path that entails techno-social engineering of humans and society.

The article concludes with two true stories that bring these two contributions together. Neither is quite a story of dehumanization in which humans become indistinguishable from machines. Rather, each is an example of an incremental step in that direction. The first concerns techno-social engineering of children’s preferences. It is the story of a simple nudge, implemented through a wearable technology distributed in an elementary school to encourage fitness. The second concerns techno-social engineering of human emotions—the Facebook Emotional Contagion Experiment. It is not (yet) a conventional nudge, but it relies on the underlying logic of nudging. Both can be seen as steps along the same path. I further explore the path through a series of plausible extensions. The extensions reveal how a series of incremental steps, each of which is cost-benefit justified, may take us down a path that is unjustifiable.

Keywords
  • nudging
  • law and economics
  • social engineering
  • rationality
  • common sense
  • privacy
  • humanity
Publication Date
March 13, 2015
Citation Information
Brett M. Frischmann. "Human-focused Turing tests: A framework for judging nudging and techno-social engineering of human beings" ExpressO (2015)
Available at: http://works.bepress.com/brett_frischmann/14/