On the unreliability of bug severity data
Empirical Software Engineering
  • Yuan Tian, Singapore Management University
  • Nasir Ali, University of Waterloo
  • David Lo, Singapore Management University
  • Ahmed E. Hassan, Queen’s University - Kingston
Publication Type
Journal Article
Version
Published Version
Publication Date
12-2015
Abstract

Severity levels of bugs (e.g., critical and minor) are often used to prioritize development efforts. Prior research has proposed approaches to automatically assign severity labels to bug reports, and all of these efforts verify their accuracy against the human-assigned bug report data stored in software repositories. In doing so, they assume that such human-assigned data is reliable, i.e., that a perfect automated approach would assign the same severity label as the one in the repository, achieving 100% accuracy. Looking at duplicate bug reports (i.e., reports referring to the same problem) from three open-source software systems (OpenOffice, Mozilla, and Eclipse), we find that around 51% of the duplicate bug reports have inconsistent human-assigned severity labels even though they refer to the same software problem. While our results directly show only that duplicate bug reports have unreliable severity labels, we believe they send warning signals about the reliability of the full bug severity data (i.e., including non-duplicate reports). Future research should explore whether our findings generalize to the full dataset, and should factor in the unreliable nature of the bug severity data. Given this unreliability, classical accuracy metrics for models/learners should not be used to assess approaches for automatically assigning severity labels. Hence, we propose a new approach to assess the performance of such models. Our new assessment approach shows that current automated approaches perform well, reaching 77–86% agreement with human-assigned severity labels.

Keywords
  • Bug report management
  • Data quality
  • Noise prediction
  • Performance evaluation
  • Severity prediction
Identifier
10.1007/s10664-015-9409-1
Publisher
Springer-Verlag (Germany)
Copyright Owner and License
Authors
Creative Commons License
Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International
Additional URL
http://doi.org/10.1007/s10664-015-9409-1
Citation Information
Yuan Tian, Nasir Ali, David Lo, and Ahmed E. Hassan. "On the unreliability of bug severity data." Empirical Software Engineering, Vol. 21, Iss. 6 (2015), pp. 2298–2323. ISSN: 1382-3256.
Available at: http://works.bepress.com/david_lo/298/