Predicting Class Testability using Object-Oriented Metrics (2004)

Author(s): Magiel Bruntink and Arie van Deursen
Venue: Proceedings of the Fourth IEEE International Workshop on Source Code Analysis and Manipulation
Date: 2004

Type of Experiment: Case Study


Applied several software metrics to two Java systems: DocGen, a commercial source
code documentation tool under development at the Software Improvement Group, and Apache
Ant, an open source build automation tool. Both systems were tested
at the class level by means of JUnit. DocGen consists of 90,000 lines of code in 640 classes
divided over 66 packages, 138 of which have an associated test class. Ant consists of
170,000 lines of code in 887 classes divided over 87 packages, 111 of which have an associated
test class.

The purpose of this paper is to increase the understanding of what makes it difficult to test
a particular class. This stems from the observation that class A may be easy to test while class
B in the same system is much more difficult to test. The authors take several software
metrics and determine whether they are correlated with their proposed test suite metrics:
dLOCC (dependent Lines Of Code for Class) and dNOTC (dependent Number
Of Test Cases). The test suites of both Java applications were developed at the developers'
discretion. This means that there were no explicit testing criteria; it was
left to each developer to decide when and where to write tests.
The test suites were tested for correlation with the following object-oriented metrics:
Depth of Inheritance Tree (DIT), Fan Out (FOUT), Lack of Cohesion Of Methods (LCOM),
Lines of Code Per Class (LOCC), Number of Children (NOC), Number Of Fields (NOF),
Number of Methods (NOM), Response For Class (RFC), and Weighted Methods per Class
(WMC). The analysis revealed that FOUT, LOCC, and RFC demonstrated a significant
correlation with the proposed test suite metrics (dLOCC and dNOTC).
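Because software metrics are rarely normally distributed, a rank-based correlation such as Spearman's is a natural fit for this kind of analysis. The sketch below computes Spearman's coefficient from scratch; the FOUT and dLOCC values are invented for demonstration and are not data from the paper.

```python
def ranks(values):
    """1-based ranks, with tied values assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1  # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

fout = [3, 7, 1, 9, 4]            # hypothetical fan-out per class
dlocc = [40, 85, 15, 120, 50]     # hypothetical test-class LOC
print(spearman(fout, dlocc))      # → 1.0 (ranks agree perfectly here)
```

A coefficient near +1 (as in this contrived example) would indicate that classes with higher fan-out consistently require larger test classes, which is the kind of relationship the paper reports for FOUT, LOCC, and RFC.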