The primary goal of this paper was to address five research questions.
The central hypothesis of this paper was that “Customers in partnership with an IT professional would be able to effectively specify functional requirements of the system in the form of executable acceptance tests.” To test this hypothesis, the authors created several criteria for what constitutes a “good” acceptance test. Graduate students, acting as the customers on the project, were surveyed on their requirements and testing knowledge both before the project began and after it finished, while undergraduate students acted as the developers and worked closely with them. The customers attended a three-hour lecture on using the FIT acceptance-test-driven development (ATDD) framework and were expected to write all of the acceptance documents, which the developers would use to test the actual system.
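The FIT workflow described above has the customer author a table of inputs and expected outputs, which a fixture then runs against the system under test. A real FIT fixture would extend `fit.ColumnFixture`; the self-contained sketch below only mimics that row-by-row checking, and the discount rule, class name, and table values are hypothetical examples, not taken from the paper:

```java
// FIT-style acceptance test sketch: a customer-authored table of
// inputs and expected outputs is checked row by row against the
// system under test. All names and values here are hypothetical.
public class DiscountFixture {
    // Stand-in for the system under test: 5% discount on orders over $100.
    static double discount(double amount) {
        return amount > 100 ? amount * 0.05 : 0.0;
    }

    public static void main(String[] args) {
        // Each row: {order amount, expected discount} -- the kind of
        // table a customer would write in a FIT acceptance document.
        double[][] table = {
            {50.0, 0.0},
            {150.0, 7.5},
            {200.0, 10.0},
        };
        int passed = 0;
        for (double[] row : table) {
            double actual = discount(row[0]);
            if (Math.abs(actual - row[1]) < 1e-9) {
                passed++;
            } else {
                System.out.println("FAIL: amount=" + row[0]
                        + " expected=" + row[1] + " actual=" + actual);
            }
        }
        System.out.println(passed + "/" + table.length + " rows passed");
    }
}
```

The appeal of this style, as the paper's hypothesis suggests, is that the customer only edits the table; the mapping onto code is the developers' job.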
The authors determined experimentally that the sampled teams' mean score on the quality of their executable acceptance test specifications was significantly higher than 75%. However, half of the students found FIT hard to learn, leading the authors to reject their hypothesis that FIT would be easy to learn without prior experience. The authors also examined test case distribution, finding that negative tests (those checking how the system reacts to incorrect or inappropriate input) made up only 6% of cases on average, compared to 94% positive tests. The paper also did a good job of addressing threats to the validity of the experiment.
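To make the positive/negative distinction concrete: a positive test feeds valid input and checks for the expected result, while a negative test feeds invalid input and checks that the system rejects it rather than silently producing a wrong answer. The validator below is a hypothetical illustration, not an example from the study:

```java
public class AgeValidatorExample {
    // Hypothetical system under test: parses an age string,
    // rejecting non-numeric or out-of-range values.
    static int parseAge(String input) {
        int age = Integer.parseInt(input.trim()); // throws on non-numeric input
        if (age < 0 || age > 150) {
            throw new IllegalArgumentException("age out of range: " + age);
        }
        return age;
    }

    public static void main(String[] args) {
        // Positive test: valid input yields the expected value.
        System.out.println("positive passed: " + (parseAge("42") == 42));

        // Negative test: invalid input must be rejected.
        boolean rejected;
        try {
            parseAge("-5");
            rejected = false;
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        System.out.println("negative passed: " + rejected);
    }
}
```

The 94/6 skew the authors report suggests the student customers wrote tests like the first kind almost exclusively, leaving error-handling behavior largely unspecified.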