Balancing out Bias: Achieving Fairness Through Balanced Training
arXiv
  • Xudong Han, The University of Melbourne, Australia
  • Timothy Baldwin, The University of Melbourne, Australia & Mohamed bin Zayed University of Artificial Intelligence
  • Trevor Cohn, The University of Melbourne, Australia
Document Type
Article
Abstract

Group bias in natural language processing tasks manifests as disparities in system error rates across texts authored by different demographic groups, typically disadvantaging minority groups. Dataset balancing has been shown to be effective at mitigating bias; however, existing approaches do not directly account for correlations between author demographics and linguistic variables, limiting their effectiveness. To achieve Equal Opportunity fairness, such as equal job opportunity without regard to demographics, this paper introduces a simple but highly effective objective for countering bias using balanced training. We extend the method in the form of a gated model, which incorporates protected attributes as input, and show that it is effective at reducing bias in predictions through demographic input perturbation, outperforming all other bias mitigation techniques when combined with balanced training. Copyright © 2021, The Authors. All rights reserved.
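
The abstract does not spell out the balanced-training objective; one common way to realize it is per-instance reweighting by the joint frequency of the target label and the protected attribute, so that every (label, group) combination contributes equally to the loss. The sketch below is an illustrative PyTorch approximation under that assumption; the function names `balanced_weights` and `balanced_loss` are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn.functional as F
from collections import Counter

def balanced_weights(labels, protected):
    """Per-instance weights inversely proportional to the joint frequency of
    (label, protected attribute), so under-represented combinations are
    up-weighted during training (an assumed reading of 'balanced training')."""
    joint = Counter(zip(labels, protected))          # counts per (y, g) cell
    n = len(labels)
    return torch.tensor([n / (len(joint) * joint[(y, g)])
                         for y, g in zip(labels, protected)])

def balanced_loss(logits, labels, protected):
    """Weighted cross-entropy using the balanced per-instance weights."""
    weights = balanced_weights(labels.tolist(), protected.tolist())
    per_example = F.cross_entropy(logits, labels, reduction="none")
    return (weights * per_example).mean()

# Minimal usage example with toy data:
# logits: (batch, num_classes), labels/protected: (batch,) integer tensors.
logits = torch.randn(4, 2)
labels = torch.tensor([0, 0, 1, 1])
protected = torch.tensor([0, 0, 0, 1])
loss = balanced_loss(logits, labels, protected)
```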

DOI
10.48550/arXiv.2109.08253
Publication Date
9-16-2021
Keywords
  • Modeling languages,
  • Natural language processing systems,
  • Syntactics,
  • Balance methods,
  • Demographic variables,
  • Error rate,
  • Evaluation methods,
  • Linguistic variable,
  • Minority groups,
  • Model learning,
  • Modeling task,
  • Re-weighting,
  • Syntactic parsing,
  • Population statistics,
  • Computation and Language (cs.CL)
Comments

IR Deposit conditions: not described

Preprint available on arXiv

Citation Information
X. Han, T. Baldwin, and T. Cohn, "Balancing out Bias: Achieving Fairness Through Balanced Training", 2022, arXiv:2109.08253