Authors: Grigori Melnik, Frank Maurer, Mike Chiasson
Venue: Agile Conference, 2006
Date: 2006
Type of Experiment: Quasi-Controlled Experiment
Sample Size: 40
Class/Experience Level: Graduate Student
Participant Selection: voluntary students in two courses
Data Collection Method: Project Artifact(s)
The primary goal of this paper was to address the following five questions:
- Can customers clearly specify functional business requirements in the form of executable acceptance tests when paired with an IT professional?
- How do customers use FIT for authoring business requirements?
- What are the trends in customer-authored executable acceptance test-based specifications?
- Does a software engineering background have an effect on the quality of the executable acceptance test-based specification?
- Is executable acceptance test-driven development a satisfactory method for customers, based on their satisfaction, their intention to use it in the future, and their intention to recommend it to colleagues?
The central hypothesis of this paper was that “Customers in partnership with an IT professional would be able to effectively specify functional requirements of the system in the form of executable acceptance tests.” To address this hypothesis, several criteria were defined for what constitutes a “good” acceptance test. The graduate students, acting as the customers on a project, were surveyed on their requirements and testing knowledge both before the project began and after it finished. Undergraduate students acted as the developers on the project and worked closely with the graduate students. The customers attended a three-hour lecture on using FIT for acceptance test-driven development (ATDD) and were expected to write all of the acceptance tests for the project, which the developers would then run against the actual implementation.
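FIT itself is a Java framework in which customers author requirements as tables (typically in HTML) and developers bind the columns to fixture code. As a rough, hypothetical illustration of that decision-table style (the business rule and all names below are invented, not from the paper), a minimal Python sketch might look like:

```python
# Hypothetical sketch of FIT's decision-table style in plain Python.
# In real FIT, the customer authors an HTML table and a fixture class
# binds its columns to fields and methods; here a list of rows stands
# in for the customer-authored table.

def discount(order_total):
    """Invented business rule under test: 5% off orders of $100 or more."""
    return round(order_total * 0.05, 2) if order_total >= 100 else 0.0

# Each row pairs an input column with an expected-output column,
# the way a customer might fill in a FIT table.
acceptance_table = [
    (50.00, 0.00),
    (100.00, 5.00),
    (250.00, 12.50),
]

def run_table(table):
    """Run every row and record (input, expected, actual, passed)."""
    results = []
    for order_total, expected in table:
        actual = discount(order_total)
        results.append((order_total, expected, actual, actual == expected))
    return results

results = run_table(acceptance_table)
```

The appeal of this style for the study's customers is that the table is readable without programming knowledge, while still being directly executable against the developers' code.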
The authors determined experimentally that the sampled teams' mean quality score for executable acceptance test specifications was significantly higher than 75%. They also found that half of the students considered FIT hard to learn, rejecting their hypothesis that FIT would be easy to pick up with no prior experience. Test case distribution was also examined: the mean proportion of negative tests (those checking how the system reacts to incorrect or inappropriate input) was 6%, compared with 94% positive tests. The paper also does a thorough job of addressing threats to the validity of the experiment.
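To make the positive/negative split the authors measured concrete, here is a small hypothetical sketch (the input rule and function name are invented, not from the paper): positive tests feed valid input and check the expected result, while negative tests feed invalid input and check that the system rejects it gracefully.

```python
# Hypothetical illustration of positive vs. negative acceptance tests.

def parse_quantity(text):
    """Invented input handler: accept only a positive integer quantity."""
    value = int(text)  # raises ValueError on non-numeric text
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# Positive test: well-formed input yields the parsed value.
assert parse_quantity("3") == 3

# Negative tests: how the system reacts to incorrect or inappropriate input.
for bad in ("0", "-2", "abc"):
    try:
        parse_quantity(bad)
        rejected = False
    except ValueError:
        rejected = True
    assert rejected, f"expected rejection of {bad!r}"
```

The study's 6%/94% finding suggests the customer-authored tables looked mostly like the first assertion above, with few rows probing invalid input.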