A Large-Scale Empirical Comparison of Object-Oriented Cohesion Metrics

Author(s): Richard Barker, Ewan Tempero
Venue: 14th Asia-Pacific Software Engineering Conference, APSEC
Date: 2007

Type of Experiment: Case Study



This paper presents an empirical study of 16 cohesion metrics (including variations of the same metric) computed over 92 open source projects containing more than 100,000 classes. At the time, this was by far the largest empirical study of cohesion metrics, both in the number of metrics compared and in the number of classes measured. The metrics studied are as follows.

1. LCOM1
2. LCOM2
3. LCOM3
4. LCOM4
5. LCOM*1
6. LCOM*2
7. LCOM*3
8. COH
9. TCC
10. LCC
11. LCCd
12. LCCi
13. CC(X)
14. CBMC
15. CAMC
16. NDH
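Most of the LCOM family counts how (un)related a class's methods are via the instance attributes they touch. As a rough illustration, here is a minimal sketch of LCOM1 in the Chidamber-Kemerer style, i.e. the number of method pairs that share no attributes; the `method_attrs` mapping and the toy class are my own hypothetical example, not data from the paper.

```python
from itertools import combinations

def lcom1(method_attrs):
    """LCOM1 sketch: count method pairs sharing no instance attributes.

    method_attrs maps each method name to the set of attribute
    names that method reads or writes.
    """
    methods = list(method_attrs)
    return sum(
        1
        for a, b in combinations(methods, 2)
        if not (method_attrs[a] & method_attrs[b])  # disjoint pair
    )

# Hypothetical class: m1 and m2 share attribute "x"; m3 touches only "z",
# so the pairs (m1, m3) and (m2, m3) are both non-cohesive.
print(lcom1({"m1": {"x"}, "m2": {"x", "y"}, "m3": {"z"}}))  # → 2
```

A perfectly cohesive class under this definition scores 0; higher values suggest the class may be doing several unrelated jobs.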

The case study does not try to evaluate the quality of each metric; instead it examines how the outputs of the different metrics vary in comparison to one another, and observes the bimodal behavior of their distributions. The real value of the paper is in helping a reader interpret a particular metric's output relative to a large corpus of projects. For example, if a metric reports a value of 1 for your class, you would like to know whether that value is typical for that metric or rare, and the distributions reported here make that judgment possible.
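The interpretation idea above can be sketched as a simple percentile lookup: given a corpus of observed values for a metric, report what fraction of classes score at or below your value. The corpus numbers below are hypothetical placeholders, not figures from the paper.

```python
def percentile_of(value, observed):
    """Fraction of observed metric values that are <= value.

    A result near 1.0 means the value is unusually high for this
    metric; a result near 0.0 means it is unusually low.
    """
    if not observed:
        raise ValueError("need at least one observed value")
    return sum(v <= value for v in observed) / len(observed)

# Hypothetical LCOM1 values collected from a corpus of classes.
corpus = [0, 0, 0, 1, 1, 3, 10, 45]
print(percentile_of(1, corpus))  # → 0.625
```

So a score of 1 here sits in the lower-middle of the (made-up) distribution, which is the kind of context the paper's corpus-wide measurements provide for real metrics.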