An Empirical Study of Relationships Among Extreme Programming Engineering Activities

Author(s): Mohammad Alshayeb and Wei Li
Venue: Information and Software Technology, Volume 48, Issue 11
Date: November 2006

Type of Experiment: Case Study
Sample Size: 6
Class/Experience Level: Graduate Student

Quality: 3

Reference: http://mythic.lib.calpoly.edu:2066/science?_ob=ArticleURL&_udi=B6V0B-4JD...

SUMMARY
An empirical study of relationships among extreme programming engineering activities reports the analysis of a statistical study done on data collected while two web-based client–server systems were developed in Java using extreme programming practices. Both systems are data mining systems that provide centralized component management and make it possible to browse information from different locations. The two systems were considered good candidates for analyzing extreme programming because both were completed under extreme programming methodologies, delivered as commercial products, and accepted by the customer as final systems.

The paper focuses mainly on the relationships among new design (adding new functionality to the system based on user “stories” identified by the customer), error fixing (using a unit test-first approach), and refactoring (changing code in a way that simplifies it while leaving the system’s overall behavior intact), in order to provide a better understanding of extreme programming based on empirical data.
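
To make these activities concrete, the following minimal Java sketch (hypothetical code, not taken from the systems in the study; JUnit 4 is assumed purely for illustration) shows a unit test written first to expose a defect, alongside production code in which a small behavior-preserving refactoring has been applied:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PriceCalculatorTest {
        // Error fixing, test-first style: the failing test that exposes the
        // defect is written before the production code is corrected.
        @Test
        public void discountIsAppliedExactlyOnce() {
            assertEquals(90.0, new PriceCalculator().totalWithDiscount(100.0, 0.10), 0.001);
        }
    }

    class PriceCalculator {
        // Refactoring: the discount arithmetic was extracted into a named
        // helper method, simplifying the code without changing its behavior.
        double totalWithDiscount(double basePrice, double discountRate) {
            return basePrice - discount(basePrice, discountRate);
        }

        private double discount(double basePrice, double discountRate) {
            return basePrice * discountRate;
        }
    }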

The developers gathered data by keeping detailed research logs throughout the development cycles, recording daily activities. Additionally, a manual log file was created for each working day during system development, which included: the tasks planned for the current iteration cycle, a description of current progress or failure towards completing tasks, the time spent on each task, a description of the problems encountered during task execution, a description of the changes made to the system, the reasons for any system changes, and any affected classes.
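
For concreteness, the structure of such a log entry could be captured in a small data type like the Java record below (a hypothetical sketch; the field names are illustrative rather than the authors' own, and records require Java 16+):

    import java.time.Duration;
    import java.time.LocalDate;
    import java.util.List;
    import java.util.Map;

    // Hypothetical shape of one daily log entry, mirroring the fields the
    // paper says the developers recorded.
    public record DailyLogEntry(
        LocalDate date,
        List<String> plannedTasks,          // tasks planned for the current iteration cycle
        String progressNotes,               // progress or failure toward completing tasks
        Map<String, Duration> timePerTask,  // time spent on each task
        String problemsEncountered,         // problems encountered during task execution
        String changesMade,                 // changes made to the system
        String changeReasons,               // reasons for any system changes
        List<String> affectedClasses        // classes affected by the changes
    ) {}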

The authors report that, according to their statistical analysis, the more new design work is performed on the system, the less refactoring and error fixing the programmers do in the long run. However, there is no relationship between the effort spent on refactoring and the effort spent on error fixing. The authors also report that error-fixing effort is related to the number of days spent on each story of the system: the more days spent on the story cards, the more error-fixing effort is performed. Refactoring effort, which showed no consistent growth trend or regular behavior, did not appear to be related to the number of days spent on creating system stories; because a relationship existed in one system but not the other, this result was inconclusive. Statistical evidence also supports the claim that refactoring does not necessarily introduce errors.

Elaborating on the high correlation between error-fixing effort and the number of days spent creating a user story, the authors note that the more time is spent on the project, the bigger and more complex the system becomes, and the more new code is introduced; hence, more tests must be written. Since error fixing is a result of refactoring or new design, the longer the time spent on a story, the more error-fixing effort is performed.
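
The summary does not name the statistical tests the authors used; purely as an illustration of the kind of analysis these findings imply, the sketch below computes a Pearson correlation coefficient between two invented daily-effort series:

    // Illustrative only: correlating daily refactoring effort with daily
    // error-fixing effort (in hours). The data is invented, and Pearson
    // correlation is an assumption, not necessarily the authors' test.
    public class EffortCorrelation {
        static double pearson(double[] x, double[] y) {
            int n = x.length;
            double sx = 0, sy = 0, sxy = 0, sxx = 0, syy = 0;
            for (int i = 0; i < n; i++) {
                sx += x[i];
                sy += y[i];
                sxy += x[i] * y[i];
                sxx += x[i] * x[i];
                syy += y[i] * y[i];
            }
            double cov = sxy - sx * sy / n;
            double varX = sxx - sx * sx / n;
            double varY = syy - sy * sy / n;
            return cov / Math.sqrt(varX * varY);
        }

        public static void main(String[] args) {
            double[] refactoringHours = {2.0, 1.5, 0.0, 3.0, 1.0};
            double[] errorFixHours    = {1.0, 0.5, 2.0, 0.5, 1.5};
            // A coefficient near zero would be consistent with the paper's
            // finding of no relationship between these two activities.
            System.out.printf("r = %.3f%n", pearson(refactoringHours, errorFixHours));
        }
    }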

Although many statistical claims are made in the paper, no conclusions can be drawn about which activities deserve more focus, how often teams should perform each activity, or what the trade-offs are of performing some activities more than others. To arrive at such guidelines, the validity of the results needs to be confirmed on a much larger scale and with much more data.
