Automated WCAG Testing Is Grrreat!
I’m a big fan of using automation in WCAG testing. I use bookmarklets, dev tools, browser features & reporting, and a pile of third-party products from assorted vendors. These save me time and effort, letting me focus on the trickier cases.
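As one illustration of what that automation can look like (my choice for the example, not an endorsement of any vendor), here is a minimal sketch that scripts a single-page scan with the open source axe-core engine through the @axe-core/puppeteer package; the URL is a placeholder:

```typescript
// Minimal sketch: run an automated axe-core scan against one page.
// Assumes the puppeteer and @axe-core/puppeteer packages; the URL
// below is a placeholder. Violations are a starting point for a
// human review, not a conformance verdict.
import puppeteer from "puppeteer";
import { AxePuppeteer } from "@axe-core/puppeteer";

async function scan(url: string): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);

  const results = await new AxePuppeteer(page).analyze();
  for (const violation of results.violations) {
    console.log(`${violation.id}: ${violation.help} (${violation.nodes.length} nodes)`);
  }

  await browser.close();
}

scan("https://example.com").catch((err) => {
  console.error(err);
  process.exit(1);
});
```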
But…
Unfortunately, the marketing machines of some vendors seem to be more effective than their tools.
No false positives? Cool. What about false negatives?
We all make honest mistakes, and sometimes rely too much on our tools to catch them. But they won’t catch them all. This doesn’t mean the tools are worthless, but the people behind them face challenges ranging from strong (questionable) opinions to genuine technical difficulty closing those gaps. If it were easy, accessibility overlays might actually be useful instead of harmful.
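To make the false-negative idea concrete, here is a hypothetical fragment that many automated checks would pass, because the rules can typically only confirm that text exists, not that it helps anyone:

```typescript
// Hypothetical markup that illustrates false negatives: typical
// automated rules check that attributes and text are present, not
// that they mean anything. A human reviewer would flag both elements.
const fragment = `
  <!-- Passes common "images must have alternate text" rules: the alt
       attribute is present. Fails human review: "image" describes nothing. -->
  <img src="q3-revenue-chart.png" alt="image">

  <!-- Passes common "links must have discernible text" rules. Fails
       human review: "click here" is meaningless out of context. -->
  <a href="/annual-report.pdf">click here</a>
`;
```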
57% issue coverage? Cool. What do you think “issues” are again?
If vendors are pitching based on their ability to catch X% of issues on a page, ask them to clarify. Ask them to define “issues”: a distinct WCAG failure, a failed rule, or every flagged node? The counting method changes the math; a tool that catches 57 of 100 missing alt attributes but misses a single keyboard trap has not covered 57% of your risk. At the very least, put them on notice that you expect the marketing message to match reality, not exploit your expectations.
A total solution to testing in a year? Cool. Wait, where’d you go?
I feel that when anybody claims they can guarantee conformance with all of WCAG, or some other maximum accessibility target, it pushes the rest of the industry down. Companies enter a race to the marketing bottom, trotting out messages grounded in what they want their prospects to hear, not the reality of the challenges in helping people.
At some level, we need to ask about the ethics of making these kinds of claims. Then ask whether we want to work with the companies and people making them.
Anyway…
My testing showed the most popular automated testing tool is the worst performer, and the others weren’t enough better to matter:
In my manual review I found almost seven-and-a-half times (7½×) as many issues as the tool with the next-highest count, across three times (3×) as many Success Criteria.
Because…
Over a decade ago Karl outlined what parts of WCAG can’t be automated:
As the tables above show, there are 9 WCAG Success Criteria (in Level A and AA) that cannot be tested for in any meaningful manner using a tool. There are another 13 that can be tested for automatically but require a human to verify. Full compliance and risk mitigation always require the involvement of a skilled professional reviewer, even when you have a tool as well.
Just last week Steve did the same:
Here is a list of WCAG 2.2 Level A and AA success criteria that I think cannot be completely tested with automated tools. These criteria require manual testing because they involve meaning, usability, intent, or user experience that automated tools cannot fully evaluate.
He provided a complete list in his follow-up post, mind the WCAG automation gap.
So…
Use the tools. But understand their limitations. Probably ignore their marketing claims. Judge the vendors making those claims. Definitely don’t be their shill.
If you don’t have the skills to evaluate these tools, I strongly suggest you seek an independent consultant who can help you. A consultant who is not a shill.
I have strong opinions. They don’t need to be yours. But I hope yours are informed.