Article
Understanding and avoiding AI failures: A practical guide
Faculty Scholarship
  • Robert Williams, University of Louisville
  • Roman Yampolskiy, University of Louisville
Document Type
Article
Publication Date
9-1-2021
Department
Computer Engineering and Computer Science
Abstract

As AI technologies increase in capability and ubiquity, AI accidents are becoming more common. Drawing on normal accident theory, high reliability theory, and open systems theory, we create a framework for understanding the risks associated with AI applications. This framework is designed to direct attention to pertinent system properties without requiring unwieldy amounts of accuracy. In addition, we use AI safety principles to quantify the unique risks of increased intelligence and human-like qualities in AI. Together, these two fields give a more complete picture of the risks of contemporary AI. By focusing on system properties near accidents instead of seeking a root cause of accidents, we identify where attention should be paid to safety for current generation AI systems.

DOI
10.3390/philosophies6030053
ORCID
0000-0001-9637-1161
Citation Information

Williams, R.; Yampolskiy, R. Understanding and Avoiding AI Failures: A Practical Guide. Philosophies 2021, 6, 53. https://doi.org/10.3390/philosophies6030053