Human-Centered Design to Address Biases in Artificial Intelligence
Journal of Medical Internet Research
  • Ellen W. Clayton, Vanderbilt University Law School
  • You Chen, Vanderbilt University Medical Center
  • Laurie L. Novak, Vanderbilt University Medical Center
  • Shilo Anders, Vanderbilt University Medical Center
  • Bradley Malin, Vanderbilt University Medical Center
Document Type
Article
Publication Date
2-1-2023
Keywords
  • artificial intelligence
  • human-centered AI
  • biomedical
  • research
  • patient
  • health
Abstract

Artificial intelligence (AI) has recognized potential to reduce health care disparities and inequities, but it can also exacerbate them if it is not implemented equitably. This perspective identifies potential biases in each stage of the AI life cycle, including data collection, annotation, machine learning model development, evaluation, deployment, operationalization, monitoring, and feedback integration. To mitigate these biases, we suggest involving a diverse group of stakeholders and applying human-centered AI principles. Human-centered AI can help ensure that AI systems are designed and used in ways that benefit patients and society, which can reduce health disparities and inequities. Recognizing and addressing biases at each stage of the AI life cycle is necessary for AI to achieve its potential in health care.

Citation Information
Ellen W. Clayton, You Chen, Laurie L. Novak, Shilo Anders, and Bradley Malin. "Human-Centered Design to Address Biases in Artificial Intelligence." Journal of Medical Internet Research (2023). doi: 10.2196/43251. ISSN: 1439-4456.
Available at: http://works.bepress.com/ellen-clayton/44/