Does Automated Unit Test Generation Really Help Software Testers? A Controlled Empirical Study

Author(s): Gordon Fraser, Matt Staats, Phil McMinn, Andrea Arcuri, Frank Padberg
Venue: ACM Transactions on Software Engineering and Methodology (TOSEM)
Date: August 2015

Type of Experiment: Controlled Experiment
Class/Experience Level: Undergraduate Student
Data Collection Method: Observation, Code Metric


The paper presents a controlled empirical study of automated unit test generation. The authors question the common belief that automating test creation necessarily benefits a project: automatically generated test suites achieve high code coverage, but high coverage alone does not guarantee that more bugs will be found. The study focuses on white-box testing. For the experiment, the authors split participants into two groups of testers: one wrote tests manually, while the other wrote tests with the assistance of an automated test generation tool.

The results of the study showed that testers using the automated test generation tool achieved significantly higher code coverage, but did not report more bugs than testers writing tests by hand. When using such tools, developers must still examine the generated test data and work out how each test exercises the code, which adds effort of its own. The authors conclude that automated test generation tools need further improvement before they can be widely adopted.
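A plausible illustration (not taken from the paper) of why high coverage need not reveal faults: generated tests typically assert the program's *current* behavior, so a test can cover a buggy branch while encoding the bug as the expected result. A minimal Java sketch, with a hypothetical class and values:

```java
// Hypothetical example: a discount method with a boundary bug.
class PriceCalculator {
    // Intended spec: half price for quantities of 10 or more.
    static double total(int quantity, double unitPrice) {
        double t = quantity * unitPrice;
        if (quantity > 10) {   // BUG: should be quantity >= 10
            t *= 0.5;
        }
        return t;
    }
}

public class GeneratedTestSketch {
    public static void main(String[] args) {
        // A tool-generated test asserts the observed outputs, so both
        // branches are covered, yet the boundary bug survives.
        if (PriceCalculator.total(11, 2.0) != 11.0) throw new AssertionError();
        // Passes, although the spec intends 10.0 here: the bug is encoded
        // as expected behavior, not exposed. Only a human reviewing the
        // assertion against the spec can spot it.
        if (PriceCalculator.total(10, 2.0) != 20.0) throw new AssertionError();
        System.out.println("generated tests pass");
    }
}
```

Both assertions pass with full branch coverage of `total`, which mirrors the study's finding that coverage gains do not automatically translate into more reported bugs.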