Article
When Machine Learning Goes Off the Rails
Harvard Business Review (2021)
  • Sara Gerke, Penn State Dickinson Law
  • I. Glenn Cohen, Harvard University
  • Theodoros Evgeniou, INSEAD
Abstract

Products and services that rely on machine learning—computer programs that constantly absorb new data and adapt their decisions in response—don’t always make ethical or accurate choices. They can cause investment losses, biased hiring decisions, or car accidents, for instance. And as such offerings proliferate across markets, the companies creating them face major new risks. Executives need to understand and mitigate the technology’s potential downside.

Machine learning can go wrong in a number of ways. Because the systems make decisions based on probabilities, some errors are always possible. Their environments may evolve in unanticipated ways, creating disconnects between the data they were trained with and the data they’re currently fed. And their complexity can make it hard to determine whether or why they made a mistake.
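The "disconnect" between training data and production data described above is usually called data drift, and it can be checked statistically. The sketch below, which is illustrative rather than the authors' method, uses a two-sample Kolmogorov–Smirnov test to compare a feature's training-time distribution against what the deployed system is currently seeing; the feature values and alert threshold are hypothetical.

```python
# Minimal data-drift check: compare the distribution of one feature at
# training time with the distribution observed in production. A significant
# difference flags the training/production "disconnect" the article describes.
# (Illustrative sketch; data and threshold are hypothetical.)
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)

# Hypothetical feature values: training sample vs. live production sample.
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.4, scale=1.2, size=5_000)  # environment has shifted

# Two-sample Kolmogorov-Smirnov test: were the two samples drawn from
# the same distribution?
stat, p_value = ks_2samp(train_feature, live_feature)

ALPHA = 0.01  # hypothetical alert threshold
if p_value < ALPHA:
    print(f"Drift detected (KS statistic={stat:.3f}, p={p_value:.2e}); "
          "review or retrain the model.")
else:
    print("No significant drift detected.")
```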

A key question executives must answer is whether it’s better to allow smart offerings to continuously evolve or to “lock” their algorithms and periodically update them. In addition, every offering will need to be appropriately tested before and after rollout and regularly monitored to make sure it’s performing as intended.
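To make the lock-versus-evolve trade-off concrete, here is a minimal sketch using scikit-learn's SGDClassifier: a "locked" model is trained once and left untouched until a deliberate redeployment, while a continuously learning model updates on each incoming batch; both are monitored against an accuracy floor after rollout. The data generator, threshold, and names are assumptions for illustration, not anything specified in the article.

```python
# Sketch of the "lock vs. continuously evolve" choice, plus post-rollout
# monitoring. Names, data, and the alert threshold are hypothetical; a real
# deployment would add validation, approval gates, and logging.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=1)

def make_batch(n=200, shift=0.0):
    """Hypothetical labeled data; `shift` simulates a changing environment."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

X0, y0 = make_batch()
classes = np.array([0, 1])

# Option 1: a "locked" model is trained once and only changes when
# engineers deliberately retrain and redeploy it.
locked = SGDClassifier(random_state=0)
locked.partial_fit(X0, y0, classes=classes)

# Option 2: a continuously learning model updates on every new batch.
continuous = SGDClassifier(random_state=0)
continuous.partial_fit(X0, y0, classes=classes)

ALERT_THRESHOLD = 0.8  # hypothetical minimum acceptable accuracy
for step in range(1, 6):
    X, y = make_batch(shift=0.2 * step)  # environment drifts over time

    # Post-rollout monitoring: score both models on the new batch
    # before (optionally) learning from it.
    for name, model in [("locked", locked), ("continuous", continuous)]:
        acc = accuracy_score(y, model.predict(X))
        if acc < ALERT_THRESHOLD:
            print(f"step {step}: {name} model below threshold (acc={acc:.2f})")

    continuous.partial_fit(X, y)  # only the continuous model adapts
```

The locked model's alerts accumulate as the environment drifts, while the continuous model tracks the change; the monitoring loop is what catches either one when it stops performing as intended.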
Publication Date
January 1, 2021
Citation Information
Sara Gerke, I. Glenn Cohen, and Theodoros Evgeniou. "When Machine Learning Goes Off the Rails," Harvard Business Review (2021).
Available at: http://works.bepress.com/sara-gerke/135/