You Can’t Make Something Accessible to Everyone

This post’s title is unpleasant, but it’s important to acknowledge the reality of the human condition and limitations in technologies. Even purpose-built assistive tech.

Broadly, when someone says something is “accessible” that’s a hopeful statement based on some best efforts. Of course, there are bad actors who assert something is accessible because they just want the clicks or don’t invest much effort. Others assert their pet features “make something accessible” but are unable to explain how. Some even promise the technology to do it, including formerly legitimate practitioners.

There are many people who think that making a web site pass WCAG 2.2 at Level AA makes it accessible. I’ve documented many cases where that’s not true, as have others, some for years now.
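
For example (a minimal sketch, with an invented URL), this link satisfies SC 2.4.4: Link Purpose (In Context), which is all Level AA conformance demands, because the surrounding sentence supplies the context. Someone skimming a screen reader’s list of links still hears only “read more”:

  <!-- Satisfies SC 2.4.4 (Level A, so fine for AA conformance)
       because the surrounding sentence provides context. In a
       links list it is just "read more". The URL is a placeholder. -->
  <p>
    Our refund policy changed this year.
    <a href="/refund-policy">Read more</a>.
  </p>

Conformant, yes. But a user navigating by links list still has to backtrack to learn what “more” even is.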

If you think passing WCAG at Level AAA will make it more accessible, then I know plenty of people who’d like to have words with SC 1.4.6: Contrast (Enhanced) for encoding eye pain.
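
To put numbers on that (a sketch; the class name is invented): WCAG computes contrast as (L1 + 0.05) / (L2 + 0.05) using relative luminance, so pure white on pure black works out to (1.0 + 0.05) / (0.0 + 0.05) = 21:1, triple the 7:1 minimum SC 1.4.6 asks for, and exactly the combination some light-sensitive readers and readers with astigmatism report as painful halation.

  <!-- 21:1 contrast, sailing past the AAA 7:1 minimum of
       SC 1.4.6, and a known complaint from some light-sensitive
       readers. The class name is invented for this sketch. -->
  <style>
    .enhanced-contrast {
      color: #fff;
      background-color: #000;
    }
  </style>
  <p class="enhanced-contrast">Maximally conformant. Not maximally comfortable.</p>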

I keep trying to stress (with authors, clients, spec writers, GitHub randos, LLM aficionados, …) that accessibility is about people. It is not a strictly technical problem to be solved with code.

Because people have varying needs across disparate contexts, with assorted expectations and unequal skill levels, using almost random technologies, never mind current moods and real-life distractions, to suggest one thing will be accessible to everyone in all those circumstances is pure hubris. Or a lack of empathy. Maybe a mix.

I’m not suggesting that claiming something is “accessible” is an overtly bad act. I am saying, however, that maybe you should explain what accessibility features it has, and let that guide people. It’s more honest to them and you.

This is why we keep working at accessibility. To shrink those cases. To move halfway to the wall, over and over, until we cannot slide a sheet of paper between our nose and the cold vinyl siding of reality. That’s why we continue to try: to make things better for yet more people, even if we can’t cover everyone.

Stop beating yourself up and be wary of those who maybe aren’t.

2 Comments

I wanted to ask you a question, Adrian.

We know that AI is not yet able to solve web accessibility issues on its own. As you rightly point out, the human eye remains essential. I have personally tested websites that are, in practice, completely inaccessible, yet still achieve a score of 100% on Axe Monitor. Scanning tools are powerful, of course, but, as you explain better than anyone, when it comes to human disability, AI will probably never be able to cover all needs.

My question therefore focuses more specifically on screen readers. I have seen that there is at least one extension for NVDA that uses AI to describe content. AI is already capable of reading text contained in images, and probably also of detecting the language of content even when the corresponding tag is not specified, among many other capabilities.

Do you know whether screen readers already integrate, or plan to integrate, this type of AI to make their use easier? For example, to correct on the fly, or bypass, code errors that cause accessibility issues, and thus deliver content that is truly understandable to the user.

I am also thinking about emails. An integrated AI could help screen readers identify the main content more easily, using more natural language and tone. It could describe images even when the alt attribute is empty, analyze the content that is actually visible rather than relying solely on the DOM in order to compensate for reading order issues, or indicate the relative position of a link or a piece of text. When several links share the same label, it could add contextual or positional information, or numbering. It could also, by inference, correctly render content even when a table, for example, is not declared with role="presentation".
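
For instance (a simplified sketch I am inventing here), a table used purely for layout but missing role="presentation" is announced by screen readers as a data table, and an AI layer would have to infer that the grid is only visual:

  <!-- Without role="presentation" (or its synonym role="none"),
       screen readers announce this as a data table even though
       the grid exists only for layout. The image path is a
       placeholder. -->
  <table>
    <tr>
      <td><img src="logo.png" alt="Example Co."></td>
      <td>Contact us today.</td>
    </tr>
  </table>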

Beyond these use cases, AI could provide more global assistance to screen reader users. It could, for instance, offer a summary of a complex page or message before starting a detailed reading, an approach that already exists, as far as I know, in Google Workspace, where an email summary is displayed above the message.

Obviously, this perspective is not a wish, but rather a point of caution. If screen readers became capable, thanks to AI, of bypassing the code and rendering only the content, there would be a real risk of developers being relieved of their responsibility to produce clean and genuinely accessible code.

In the long run, if content accessibility were to rely mainly on an AI layer tasked with repairing or interpreting flawed implementations, best practices could be pushed into the background. Yet AI feeds on what it finds on the web. If accessible code were to become the exception rather than the norm, AI itself would eventually learn and reproduce inaccessible patterns. We would then enter a vicious circle, where AI compensates for shortcomings that it indirectly helps to reinforce, a kind of snake biting its own tail.

MatSol.
In response to MatSol.

Do you know whether screen readers already integrate, or plan to integrate, [the ability to describe images] to make their use easier?

JAWS, VoiceOver(s), and TalkBack all offer options to use a combination of computer vision and LLMs to describe images (I thought Narrator did, and I can’t speak to Orca). The accuracy, however, is questionable, and I understand the consistency between queries for the same user and same image is not ideal. OCR, on the other hand, can still handle transcribing text in images reasonably well.

Broadly, the rest of your comment outlines ways “AI” could help users, but mostly what you’re discussing is LLM technology. It is a generative technology, not an analytical one. How LLMs will address all that is largely a function of models, training data, and inputs. It will never be consistent, but at least it can be entertaining and, some percent of the time, utterly wrong.

We would then enter a vicious circle, where AI compensates for shortcomings that it indirectly helps to reinforce, a kind of snake biting its own tail.

We’re already there. There may be a reason “slop” is the Merriam-Webster word of the year.
