Article
Rotten Green Tests
2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)
  • Julien Delplanque, Univ. Lille, CNRS, France
  • Stéphane Ducasse, RMOD - Inria Lille, France
  • Guillermo Polito, Univ. Lille, CNRS, France
  • Andrew P. Black, Portland State University
  • Anne Etien, Univ. Lille, CNRS, France
Document Type
Citation
Publication Date
May 1, 2019
Abstract

Unit tests are a tenet of agile programming methodologies, and are widely used to improve code quality and prevent code regression. A green (passing) test is usually taken as a robust sign that the code under test is valid. However, some green tests contain assertions that are never executed. We call such tests Rotten Green Tests. Rotten Green Tests represent a case worse than a broken test: they report that the code under test is valid, but in fact do not test that validity. We describe an approach to identify rotten green tests by combining simple static and dynamic call-site analyses. Our approach takes into account test helper methods, inherited helpers, and trait compositions, and has been implemented in a tool called DrTest. DrTest reports no false negatives, yet it still reports some false positives due to conditional use or multiple test contexts. Using DrTest we conducted an empirical evaluation of 19,905 real test cases in mature projects of the Pharo ecosystem. The results of the evaluation show that the tool is effective; it detected 294 rotten green tests, that is, tests containing assertions that are never executed. Some rotten tests have been “sleeping” in Pharo for at least 5 years.
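
The phenomenon the abstract describes is easy to reproduce. As an illustration only (the paper's analysis and the DrTest tool target Pharo, not Python; the Widget class below is hypothetical), the following sketch shows a test whose assertion is guarded by a condition that is always false, so the assertion never executes, yet the test runner reports the test green:

import unittest

class Widget:
    """Toy class for illustration; not from the paper."""
    def supports_rendering(self):
        return False  # feature flag happens to be off in this configuration

    def render(self):
        return "<widget/>"

class WidgetTest(unittest.TestCase):
    def test_render(self):
        widget = Widget()
        if widget.supports_rendering():
            # Statically, this looks like a real check, but it never
            # runs: supports_rendering() always returns False here.
            self.assertEqual(widget.render(), "<widget/>")
        # The method returns without executing any assertion, so the
        # runner reports the test as passing: a rotten green test.

if __name__ == "__main__":
    unittest.main()

A DrTest-style detector would flag this test by combining the two analyses the abstract mentions: a static pass finds the assertion call site inside test_render, and a dynamic pass records that this call site never fired during the (green) run.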

DOI
10.1109/ICSE.2019.00062
Persistent Identifier
https://archives.pdx.edu/ds/psu/34757
Publisher
IEEE
Citation Information
Delplanque, J., Ducasse, S., Polito, G., Black, A. P., & Etien, A. (2019). Rotten Green Tests. Institute of Electrical and Electronics Engineers (IEEE). https://doi.org/10.1109/icse.2019.00062