“Myth of Usability Testing” at ALA

There is a very good article over at A List Apart today titled “The Myth of Usability Testing.” The article opens with an example of how multiple usability evaluation teams, given the same task and allowed to run at it as they saw fit, found far less overlap in the issues they identified than one would hope.

The author goes on to explain why usability evaluation is unreliable, using a series of examples (mistakes that seem painfully obvious, and yet keep happening) broken into two main categories.

From here the article outlines what usability testing is actually good for, and then helps focus the reader on the reality of the testing process and its results. I’m glossing over the other 2/3 of the article partly because I wanted to draw attention to the bits above and partly because you should just go read it already. There are some good links in the article for tools that can help identify trouble spots and support an evaluation.

One Comment


The picture explains it all… When you have a hammer, every problem looks like a nail.

The example of a Google search visitor vs. a NYTimes.com visitor drives that point home well. Usability studies are not a one-size-fits-all tool, and this article explains why very well.
