What Do Usability Evaluators Do in Practice? An Explorative Study of Think-Aloud Testing

Author(s): Mie Nørgaard, Kasper Hornbæk
Venue: ACM Special Interest Group on Computer-Human Interaction, Proceedings of the 6th Conference on Designing Interactive Systems (DIS '06)
Date: 2006

Type of Experiment: Case Study
Sample Size: 14

Quality: 3

DOI: http://doi.acm.org/10.1145/1142405.1142439

The authors of this paper study the practice of think-aloud testing as a form of usability evaluation, a practice that is widely used but rarely studied. They examined in detail the audio recordings of 14 think-aloud sessions and found that "immediate analysis of observations made in the think-aloud sessions is done only sporadically, if at all. When testing, evaluators ask about hypothetical situations and seem to seek confirmation of problems that they are already aware of." The evaluators' questions concern hypothetical situations and user expectations rather than problems the users actually experienced. In addition, the evaluators often neglect utility and ask only about usability.

The study consisted of 14 think-aloud sessions conducted at seven companies. The aim was to document what evaluators do in industrial practice and to help them better understand the strengths and weaknesses of their work. Data collection was kept as wide and open-ended as possible, following the view that researchers should not begin an investigation with a fixed list of hypotheses. The analysis proceeded in three passes: the researchers first segmented the audio recordings and applied descriptive keywords to each segment, then re-evaluated the segments, adjusting keywords or applying new ones, and finally analyzed and interpreted the groups of segments that shared keywords.
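To make that coding workflow concrete, the following is a minimal Python sketch of the three passes. The sessions, timestamps, and keywords are invented for illustration and are not taken from the paper's data.

    from collections import defaultdict

    # Pass 1: segment the recordings and attach descriptive keywords
    # (all values below are hypothetical examples).
    segments = [
        {"session": 1, "time": "00:04:10", "keywords": {"question", "hypothetical"}},
        {"session": 1, "time": "00:11:32", "keywords": {"confirmation", "known-issue"}},
        {"session": 2, "time": "00:02:05", "keywords": {"question", "expectation"}},
        {"session": 2, "time": "00:15:47", "keywords": {"confirmation", "known-issue"}},
    ]

    # Pass 2: re-evaluate each segment, adjusting or adding keywords.
    for seg in segments:
        if "hypothetical" in seg["keywords"]:
            seg["keywords"].add("expectation")

    # Pass 3: group segments that share a keyword so they can be
    # interpreted together across sessions.
    by_keyword = defaultdict(list)
    for seg in segments:
        for kw in seg["keywords"]:
            by_keyword[kw].append((seg["session"], seg["time"]))

    for kw, occurrences in sorted(by_keyword.items()):
        print(f"{kw}: {occurrences}")

Grouping by shared keywords is what lets recurring patterns, such as repeated confirmation of known issues, surface across otherwise separate sessions.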

The researchers identified the following areas: analysis of the results from a session, confirmation of known issues, practical realities, questions asked during a test, laboratory-style scientific standards, and uncovering usability problems or utility concerns. They found that analysis of usability concerns rarely occurs immediately after a session; instead, most sessions seem to focus on confirming known issues.
