Context: Clone benchmarks are essential for assessing and improving clone detection tools and algorithms. Among existing benchmarks, Bellon's benchmark is widely used by the research community. However, a serious threat to the validity of this benchmark is that the reference clones it contains were manually validated by Bellon alone; other researchers may disagree with his judgment. Objective: In this paper, we perform an empirical assessment of Bellon's benchmark. Method: We seek the opinions of eighteen participants on a subset of Bellon's benchmark to determine whether researchers should trust the reference clones it contains. Results: Our experiment shows that a significant proportion of the reference clones are debatable, and that this disagreement can introduce noise into results obtained using the benchmark.
Keywords:
- Code clone
- Empirical study
- Software metrics
Available at: http://works.bepress.com/david_lo/172/