An Empirical Assessment of Bellon's Clone Benchmark
EASE '15: Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering, April 2015
  • Alan Charpentier, University of Bordeaux
  • Jean-Rémy Falleri, University of Bordeaux
  • David Lo, Singapore Management University
  • Laurent Réveillère, University of Bordeaux
Publication Type
Conference Proceeding Article
Version
Published version
Publication Date
4-2015
Abstract

Context: Clone benchmarks are essential for assessing and improving clone detection tools and algorithms. Among existing benchmarks, Bellon's benchmark is widely used by the research community. However, a serious threat to its validity is that the reference clones it contains were manually validated by Bellon alone; other researchers may disagree with his judgment. Objective: In this paper, we perform an empirical assessment of Bellon's benchmark. Method: We collect the opinions of eighteen participants on a subset of Bellon's benchmark to determine whether researchers should trust the reference clones it contains. Results: Our experiment shows that a significant proportion of the reference clones are debatable, and that this can introduce noise into results obtained using the benchmark.

Keywords
  • Code clone
  • Empirical study
  • Software metrics
ISBN
9781450333504
Identifier
10.1145/2745802.2745821
Publisher
ACM
City or Country
New York
Copyright Owner and License
Publisher
Creative Commons License
Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International
Additional URL
https://doi.org/10.1145/2745802.2745821
Citation Information
Alan Charpentier, Jean-Rémy Falleri, David Lo, and Laurent Réveillère. "An Empirical Assessment of Bellon's Clone Benchmark." EASE '15: Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering (April 2015), pp. 1-10.
Available at: http://works.bepress.com/david_lo/172/