“Myth of Usability Testing” at ALA
There is a very good article over at A List Apart today titled “The Myth of Usability Testing.” The article starts off with an example of how multiple usability evaluation teams, given the same task and allowed to run at it as they saw fit, found far less overlap in the issues they uncovered than one would hope.
The author goes on to explain why usability evaluation is unreliable with a series of examples (which seem painfully obvious, and yet these mistakes keep happening) broken into two main categories:
- “Right questions, wrong people, and vice versa.”
Using existing users to evaluate a site is a loaded approach: they already have expectations set by the current site that taint their ability to see other options. Conversely, asking new users to complete tasks driven by an existing design is not a good way to evaluate new approaches.
- “Testing and evaluation is useless without context.”
I often hear the claim that “nothing can be more than two clicks from the home page,” but this ignores the real context of the site and its users. Blanket statements or goals like that can harm an evaluation; instead, a test should start with an understanding of user goals, success metrics, and real site goals.
From here the article outlines what usability testing is actually good for, and then helps focus the reader on the reality of the testing process and its results. I’m glossing over the other 2/3 of the article partly because I wanted to draw attention to the bits above and partly because you should just go read it already. There are some good links in the article for tools that can help identify trouble spots and support an evaluation.
The picture explains it all… When you have a hammer, every problem looks like a nail.
The example of a Google search visitor vs. a NYTimes.com visitor drives that point home well. Usability studies are not a one-size-fits-all tool, and this article explains why.